00:00:00.001 Started by upstream project "autotest-per-patch" build number 132389 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.073 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.074 The recommended git tool is: git 00:00:00.074 using credential 00000000-0000-0000-0000-000000000002 00:00:00.076 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.127 Fetching changes from the remote Git repository 00:00:00.130 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.198 Using shallow fetch with depth 1 00:00:00.198 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.198 > git --version # timeout=10 00:00:00.274 > git --version # 'git version 2.39.2' 00:00:00.274 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.334 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.334 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.295 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.313 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.331 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:05.331 > git config core.sparsecheckout # timeout=10 00:00:05.347 > git read-tree -mu HEAD # timeout=10 00:00:05.366 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:05.388 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:05.388 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:05.484 [Pipeline] Start of Pipeline 00:00:05.499 [Pipeline] library 00:00:05.501 Loading library shm_lib@master 00:00:05.501 Library shm_lib@master is cached. Copying from home. 00:00:05.516 [Pipeline] node 00:00:05.523 Running on VM-host-SM4 in /var/jenkins/workspace/nvme-vg-autotest 00:00:05.526 [Pipeline] { 00:00:05.536 [Pipeline] catchError 00:00:05.537 [Pipeline] { 00:00:05.549 [Pipeline] wrap 00:00:05.555 [Pipeline] { 00:00:05.562 [Pipeline] stage 00:00:05.563 [Pipeline] { (Prologue) 00:00:05.575 [Pipeline] echo 00:00:05.576 Node: VM-host-SM4 00:00:05.581 [Pipeline] cleanWs 00:00:05.589 [WS-CLEANUP] Deleting project workspace... 00:00:05.589 [WS-CLEANUP] Deferred wipeout is used... 
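Earlier in this prologue, the jbp helper repository is pinned to a single commit with a depth-1 fetch before the pipeline proper starts. Below is a minimal shell sketch of that same sequence, using the URL and SHA recorded in the log; it is an approximation, not the Jenkins git plugin's own code, which additionally drives sparse checkout and git read-tree as the trace shows.

# Sketch: shallow-fetch one branch tip and detach onto the recorded commit.
REPO_URL=https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
REVISION=db4637e8b949f278f369ec13f70585206ccd9507   # works while this is still the master tip

git init jbp && cd jbp
git config remote.origin.url "$REPO_URL"
# --depth=1 fetches only the tip commit of refs/heads/master.
git fetch --tags --force --progress --depth=1 -- "$REPO_URL" refs/heads/master
# Detached checkout of the fetched commit, mirroring the log's `git checkout -f <sha>`.
git checkout -f "$REVISION"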
00:00:05.595 [WS-CLEANUP] done 00:00:05.861 [Pipeline] setCustomBuildProperty 00:00:05.938 [Pipeline] httpRequest 00:00:06.244 [Pipeline] echo 00:00:06.245 Sorcerer 10.211.164.20 is alive 00:00:06.253 [Pipeline] retry 00:00:06.255 [Pipeline] { 00:00:06.266 [Pipeline] httpRequest 00:00:06.270 HttpMethod: GET 00:00:06.271 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.271 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.278 Response Code: HTTP/1.1 200 OK 00:00:06.278 Success: Status code 200 is in the accepted range: 200,404 00:00:06.279 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:16.939 [Pipeline] } 00:00:16.952 [Pipeline] // retry 00:00:16.957 [Pipeline] sh 00:00:17.254 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:17.268 [Pipeline] httpRequest 00:00:17.999 [Pipeline] echo 00:00:18.001 Sorcerer 10.211.164.20 is alive 00:00:18.010 [Pipeline] retry 00:00:18.011 [Pipeline] { 00:00:18.024 [Pipeline] httpRequest 00:00:18.028 HttpMethod: GET 00:00:18.029 URL: http://10.211.164.20/packages/spdk_f86091626013397dd00388458c6a665e61aa5e6d.tar.gz 00:00:18.030 Sending request to url: http://10.211.164.20/packages/spdk_f86091626013397dd00388458c6a665e61aa5e6d.tar.gz 00:00:18.033 Response Code: HTTP/1.1 200 OK 00:00:18.035 Success: Status code 200 is in the accepted range: 200,404 00:00:18.035 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_f86091626013397dd00388458c6a665e61aa5e6d.tar.gz 00:03:11.497 [Pipeline] } 00:03:11.515 [Pipeline] // retry 00:03:11.523 [Pipeline] sh 00:03:11.803 + tar --no-same-owner -xf spdk_f86091626013397dd00388458c6a665e61aa5e6d.tar.gz 00:03:15.099 [Pipeline] sh 00:03:15.381 + git -C spdk log --oneline -n5 00:03:15.382 f86091626 dif: Rename internal generate/verify_copy() by insert/strip_copy() 00:03:15.382 0383e688b bdev/nvme: Fix race between reset and qpair creation/deletion 00:03:15.382 a5dab6cf7 test/nvme/xnvme: Make sure nvme selected for tests is not used 00:03:15.382 876509865 test/nvme/xnvme: Test all conserve_cpu variants 00:03:15.382 a25b16198 test/nvme/xnvme: Enable polling in nvme driver 00:03:15.401 [Pipeline] writeFile 00:03:15.418 [Pipeline] sh 00:03:15.698 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:03:15.709 [Pipeline] sh 00:03:16.041 + cat autorun-spdk.conf 00:03:16.041 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:16.041 SPDK_TEST_NVME=1 00:03:16.041 SPDK_TEST_FTL=1 00:03:16.041 SPDK_TEST_ISAL=1 00:03:16.041 SPDK_RUN_ASAN=1 00:03:16.041 SPDK_RUN_UBSAN=1 00:03:16.041 SPDK_TEST_XNVME=1 00:03:16.041 SPDK_TEST_NVME_FDP=1 00:03:16.041 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:16.047 RUN_NIGHTLY=0 00:03:16.049 [Pipeline] } 00:03:16.063 [Pipeline] // stage 00:03:16.085 [Pipeline] stage 00:03:16.088 [Pipeline] { (Run VM) 00:03:16.101 [Pipeline] sh 00:03:16.380 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:03:16.380 + echo 'Start stage prepare_nvme.sh' 00:03:16.380 Start stage prepare_nvme.sh 00:03:16.380 + [[ -n 6 ]] 00:03:16.380 + disk_prefix=ex6 00:03:16.380 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:03:16.380 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:03:16.380 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:03:16.380 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:16.380 ++ SPDK_TEST_NVME=1 00:03:16.380 ++ SPDK_TEST_FTL=1 00:03:16.380 ++ 
SPDK_TEST_ISAL=1 00:03:16.380 ++ SPDK_RUN_ASAN=1 00:03:16.380 ++ SPDK_RUN_UBSAN=1 00:03:16.380 ++ SPDK_TEST_XNVME=1 00:03:16.380 ++ SPDK_TEST_NVME_FDP=1 00:03:16.380 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:16.380 ++ RUN_NIGHTLY=0 00:03:16.380 + cd /var/jenkins/workspace/nvme-vg-autotest 00:03:16.380 + nvme_files=() 00:03:16.380 + declare -A nvme_files 00:03:16.380 + backend_dir=/var/lib/libvirt/images/backends 00:03:16.380 + nvme_files['nvme.img']=5G 00:03:16.380 + nvme_files['nvme-cmb.img']=5G 00:03:16.380 + nvme_files['nvme-multi0.img']=4G 00:03:16.380 + nvme_files['nvme-multi1.img']=4G 00:03:16.381 + nvme_files['nvme-multi2.img']=4G 00:03:16.381 + nvme_files['nvme-openstack.img']=8G 00:03:16.381 + nvme_files['nvme-zns.img']=5G 00:03:16.381 + (( SPDK_TEST_NVME_PMR == 1 )) 00:03:16.381 + (( SPDK_TEST_FTL == 1 )) 00:03:16.381 + nvme_files["nvme-ftl.img"]=6G 00:03:16.381 + (( SPDK_TEST_NVME_FDP == 1 )) 00:03:16.381 + nvme_files["nvme-fdp.img"]=1G 00:03:16.381 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:03:16.381 + for nvme in "${!nvme_files[@]}" 00:03:16.381 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G 00:03:16.381 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:03:16.381 + for nvme in "${!nvme_files[@]}" 00:03:16.381 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-ftl.img -s 6G 00:03:16.381 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:03:16.381 + for nvme in "${!nvme_files[@]}" 00:03:16.381 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G 00:03:16.381 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:03:16.381 + for nvme in "${!nvme_files[@]}" 00:03:16.381 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G 00:03:16.381 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:03:16.381 + for nvme in "${!nvme_files[@]}" 00:03:16.381 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G 00:03:16.639 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:03:16.639 + for nvme in "${!nvme_files[@]}" 00:03:16.639 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G 00:03:16.639 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:03:16.639 + for nvme in "${!nvme_files[@]}" 00:03:16.639 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G 00:03:16.639 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:03:16.639 + for nvme in "${!nvme_files[@]}" 00:03:16.639 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-fdp.img -s 1G 00:03:16.639 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:03:16.639 + for nvme in "${!nvme_files[@]}" 00:03:16.639 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 
5G 00:03:16.897 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:03:16.897 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu 00:03:16.897 + echo 'End stage prepare_nvme.sh' 00:03:16.897 End stage prepare_nvme.sh 00:03:16.906 [Pipeline] sh 00:03:17.187 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:03:17.187 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex6-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:03:17.187 00:03:17.187 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:03:17.187 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:03:17.187 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:03:17.187 HELP=0 00:03:17.187 DRY_RUN=0 00:03:17.187 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme-ftl.img,/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,/var/lib/libvirt/images/backends/ex6-nvme-fdp.img, 00:03:17.187 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:03:17.187 NVME_AUTO_CREATE=0 00:03:17.187 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,, 00:03:17.187 NVME_CMB=,,,, 00:03:17.187 NVME_PMR=,,,, 00:03:17.187 NVME_ZNS=,,,, 00:03:17.187 NVME_MS=true,,,, 00:03:17.187 NVME_FDP=,,,on, 00:03:17.187 SPDK_VAGRANT_DISTRO=fedora39 00:03:17.187 SPDK_VAGRANT_VMCPU=10 00:03:17.187 SPDK_VAGRANT_VMRAM=12288 00:03:17.187 SPDK_VAGRANT_PROVIDER=libvirt 00:03:17.187 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:03:17.187 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:03:17.187 SPDK_OPENSTACK_NETWORK=0 00:03:17.187 VAGRANT_PACKAGE_BOX=0 00:03:17.187 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:03:17.187 FORCE_DISTRO=true 00:03:17.187 VAGRANT_BOX_VERSION= 00:03:17.187 EXTRA_VAGRANTFILES= 00:03:17.187 NIC_MODEL=e1000 00:03:17.187 00:03:17.187 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:03:17.445 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:03:20.727 Bringing machine 'default' up with 'libvirt' provider... 00:03:20.727 ==> default: Creating image (snapshot of base box volume). 00:03:20.999 ==> default: Creating domain with the following settings... 
00:03:20.999 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732101446_9c632f88783ae9d9d225 00:03:20.999 ==> default: -- Domain type: kvm 00:03:20.999 ==> default: -- Cpus: 10 00:03:20.999 ==> default: -- Feature: acpi 00:03:20.999 ==> default: -- Feature: apic 00:03:20.999 ==> default: -- Feature: pae 00:03:20.999 ==> default: -- Memory: 12288M 00:03:20.999 ==> default: -- Memory Backing: hugepages: 00:03:20.999 ==> default: -- Management MAC: 00:03:20.999 ==> default: -- Loader: 00:03:20.999 ==> default: -- Nvram: 00:03:20.999 ==> default: -- Base box: spdk/fedora39 00:03:20.999 ==> default: -- Storage pool: default 00:03:21.000 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732101446_9c632f88783ae9d9d225.img (20G) 00:03:21.000 ==> default: -- Volume Cache: default 00:03:21.000 ==> default: -- Kernel: 00:03:21.000 ==> default: -- Initrd: 00:03:21.000 ==> default: -- Graphics Type: vnc 00:03:21.000 ==> default: -- Graphics Port: -1 00:03:21.000 ==> default: -- Graphics IP: 127.0.0.1 00:03:21.000 ==> default: -- Graphics Password: Not defined 00:03:21.000 ==> default: -- Video Type: cirrus 00:03:21.000 ==> default: -- Video VRAM: 9216 00:03:21.000 ==> default: -- Sound Type: 00:03:21.000 ==> default: -- Keymap: en-us 00:03:21.000 ==> default: -- TPM Path: 00:03:21.000 ==> default: -- INPUT: type=mouse, bus=ps2 00:03:21.000 ==> default: -- Command line args: 00:03:21.000 ==> default: -> value=-device, 00:03:21.000 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:03:21.000 ==> default: -> value=-drive, 00:03:21.000 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:03:21.000 ==> default: -> value=-device, 00:03:21.000 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:03:21.000 ==> default: -> value=-device, 00:03:21.000 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:03:21.000 ==> default: -> value=-drive, 00:03:21.000 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-1-drive0, 00:03:21.000 ==> default: -> value=-device, 00:03:21.000 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:21.000 ==> default: -> value=-device, 00:03:21.000 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:03:21.000 ==> default: -> value=-drive, 00:03:21.000 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:03:21.000 ==> default: -> value=-device, 00:03:21.000 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:21.000 ==> default: -> value=-drive, 00:03:21.000 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:03:21.000 ==> default: -> value=-device, 00:03:21.000 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:21.000 ==> default: -> value=-drive, 00:03:21.000 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:03:21.000 ==> default: -> value=-device, 00:03:21.000 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:21.000 ==> default: -> value=-device, 00:03:21.000 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:03:21.000 ==> default: -> value=-device, 00:03:21.000 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:03:21.000 ==> default: -> value=-drive, 00:03:21.000 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:03:21.000 ==> default: -> value=-device, 00:03:21.000 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:21.266 ==> default: Creating shared folders metadata... 00:03:21.266 ==> default: Starting domain. 00:03:23.798 ==> default: Waiting for domain to get an IP address... 00:03:38.673 ==> default: Waiting for SSH to become available... 00:03:40.051 ==> default: Configuring and enabling network interfaces... 00:03:45.321 default: SSH address: 192.168.121.174:22 00:03:45.321 default: SSH username: vagrant 00:03:45.321 default: SSH auth method: private key 00:03:47.294 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:03:55.407 ==> default: Mounting SSHFS shared folder... 00:03:57.308 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:03:57.308 ==> default: Checking Mount.. 00:03:58.682 ==> default: Folder Successfully Mounted! 00:03:58.682 ==> default: Running provisioner: file... 00:03:59.618 default: ~/.gitconfig => .gitconfig 00:03:59.877 00:03:59.877 SUCCESS! 00:03:59.877 00:03:59.877 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:03:59.877 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:03:59.877 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:03:59.877 00:03:59.927 [Pipeline] } 00:03:59.944 [Pipeline] // stage 00:03:59.954 [Pipeline] dir 00:03:59.955 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:03:59.956 [Pipeline] { 00:03:59.971 [Pipeline] catchError 00:03:59.973 [Pipeline] { 00:03:59.985 [Pipeline] sh 00:04:00.261 + vagrant ssh-config --host vagrant 00:04:00.261 + sed -ne /^Host/,$p 00:04:00.261 + tee ssh_conf 00:04:04.457 Host vagrant 00:04:04.457 HostName 192.168.121.174 00:04:04.457 User vagrant 00:04:04.457 Port 22 00:04:04.457 UserKnownHostsFile /dev/null 00:04:04.457 StrictHostKeyChecking no 00:04:04.457 PasswordAuthentication no 00:04:04.457 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:04:04.457 IdentitiesOnly yes 00:04:04.457 LogLevel FATAL 00:04:04.457 ForwardAgent yes 00:04:04.457 ForwardX11 yes 00:04:04.457 00:04:04.470 [Pipeline] withEnv 00:04:04.472 [Pipeline] { 00:04:04.485 [Pipeline] sh 00:04:04.764 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:04:04.764 source /etc/os-release 00:04:04.764 [[ -e /image.version ]] && img=$(< /image.version) 00:04:04.764 # Minimal, systemd-like check. 
00:04:04.764 if [[ -e /.dockerenv ]]; then 00:04:04.764 # Clear garbage from the node's name: 00:04:04.764 # agt-er_autotest_547-896 -> autotest_547-896 00:04:04.764 # $HOSTNAME is the actual container id 00:04:04.764 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:04:04.764 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:04:04.764 # We can assume this is a mount from a host where container is running, 00:04:04.764 # so fetch its hostname to easily identify the target swarm worker. 00:04:04.764 container="$(< /etc/hostname) ($agent)" 00:04:04.764 else 00:04:04.764 # Fallback 00:04:04.764 container=$agent 00:04:04.764 fi 00:04:04.764 fi 00:04:04.764 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:04:04.764 00:04:05.033 [Pipeline] } 00:04:05.049 [Pipeline] // withEnv 00:04:05.058 [Pipeline] setCustomBuildProperty 00:04:05.073 [Pipeline] stage 00:04:05.075 [Pipeline] { (Tests) 00:04:05.092 [Pipeline] sh 00:04:05.374 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:04:05.646 [Pipeline] sh 00:04:05.929 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:04:06.201 [Pipeline] timeout 00:04:06.201 Timeout set to expire in 50 min 00:04:06.203 [Pipeline] { 00:04:06.217 [Pipeline] sh 00:04:06.500 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:04:07.068 HEAD is now at f86091626 dif: Rename internal generate/verify_copy() by insert/strip_copy() 00:04:07.080 [Pipeline] sh 00:04:07.360 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:04:07.632 [Pipeline] sh 00:04:07.928 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:04:08.199 [Pipeline] sh 00:04:08.484 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:04:08.742 ++ readlink -f spdk_repo 00:04:08.742 + DIR_ROOT=/home/vagrant/spdk_repo 00:04:08.742 + [[ -n /home/vagrant/spdk_repo ]] 00:04:08.742 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:04:08.742 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:04:08.742 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:04:08.742 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:04:08.742 + [[ -d /home/vagrant/spdk_repo/output ]] 00:04:08.742 + [[ nvme-vg-autotest == pkgdep-* ]] 00:04:08.742 + cd /home/vagrant/spdk_repo 00:04:08.742 + source /etc/os-release 00:04:08.742 ++ NAME='Fedora Linux' 00:04:08.742 ++ VERSION='39 (Cloud Edition)' 00:04:08.742 ++ ID=fedora 00:04:08.742 ++ VERSION_ID=39 00:04:08.742 ++ VERSION_CODENAME= 00:04:08.742 ++ PLATFORM_ID=platform:f39 00:04:08.742 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:04:08.742 ++ ANSI_COLOR='0;38;2;60;110;180' 00:04:08.742 ++ LOGO=fedora-logo-icon 00:04:08.742 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:04:08.742 ++ HOME_URL=https://fedoraproject.org/ 00:04:08.742 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:04:08.743 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:04:08.743 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:04:08.743 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:04:08.743 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:04:08.743 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:04:08.743 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:04:08.743 ++ SUPPORT_END=2024-11-12 00:04:08.743 ++ VARIANT='Cloud Edition' 00:04:08.743 ++ VARIANT_ID=cloud 00:04:08.743 + uname -a 00:04:08.743 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:04:08.743 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:09.001 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:09.258 Hugepages 00:04:09.259 node hugesize free / total 00:04:09.259 node0 1048576kB 0 / 0 00:04:09.517 node0 2048kB 0 / 0 00:04:09.517 00:04:09.517 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:09.517 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:09.517 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:09.517 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:09.517 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:04:09.517 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:09.517 + rm -f /tmp/spdk-ld-path 00:04:09.517 + source autorun-spdk.conf 00:04:09.517 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:09.517 ++ SPDK_TEST_NVME=1 00:04:09.517 ++ SPDK_TEST_FTL=1 00:04:09.517 ++ SPDK_TEST_ISAL=1 00:04:09.517 ++ SPDK_RUN_ASAN=1 00:04:09.517 ++ SPDK_RUN_UBSAN=1 00:04:09.517 ++ SPDK_TEST_XNVME=1 00:04:09.517 ++ SPDK_TEST_NVME_FDP=1 00:04:09.517 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:09.517 ++ RUN_NIGHTLY=0 00:04:09.517 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:04:09.517 + [[ -n '' ]] 00:04:09.517 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:04:09.517 + for M in /var/spdk/build-*-manifest.txt 00:04:09.517 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:04:09.517 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:04:09.517 + for M in /var/spdk/build-*-manifest.txt 00:04:09.517 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:04:09.517 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:04:09.517 + for M in /var/spdk/build-*-manifest.txt 00:04:09.517 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:04:09.517 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:04:09.517 ++ uname 00:04:09.517 + [[ Linux == \L\i\n\u\x ]] 00:04:09.517 + sudo dmesg -T 00:04:09.517 + sudo dmesg --clear 00:04:09.778 + dmesg_pid=5301 00:04:09.778 
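The `setup.sh status` table above lists the virtio boot disk plus the four emulated NVMe controllers (nvme0 through nvme3) and their namespaces, matching the QEMU -device arguments earlier in the log. A hedged sketch of reading the same inventory straight from sysfs on a stock Linux guest follows; the paths are standard kernel sysfs, but the loop and formatting are illustrative, not setup.sh's own implementation.

#!/usr/bin/env bash
# List each NVMe controller's transport (PCI) address and namespaces via sysfs.
for ctrl in /sys/class/nvme/nvme*; do
    [ -e "$ctrl" ] || continue            # no NVMe controllers present
    name=$(basename "$ctrl")              # e.g. nvme2
    addr=$(cat "$ctrl/address")           # e.g. 0000:00:12.0
    # Namespaces appear as nvme2n1, nvme2n2, ... under the controller node.
    namespaces=$(find "$ctrl/" -maxdepth 1 -name "${name}n*" -printf '%f ' 2>/dev/null)
    echo "$addr $name: ${namespaces:-no namespaces}"
done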
+ [[ Fedora Linux == FreeBSD ]] 00:04:09.778 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:09.778 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:09.778 + sudo dmesg -Tw 00:04:09.778 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:04:09.778 + [[ -x /usr/src/fio-static/fio ]] 00:04:09.778 + export FIO_BIN=/usr/src/fio-static/fio 00:04:09.778 + FIO_BIN=/usr/src/fio-static/fio 00:04:09.778 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:04:09.778 + [[ ! -v VFIO_QEMU_BIN ]] 00:04:09.778 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:04:09.778 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:09.778 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:09.778 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:04:09.778 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:09.778 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:09.778 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:09.778 11:18:15 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:04:09.778 11:18:15 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:09.778 11:18:15 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:09.778 11:18:15 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:04:09.778 11:18:15 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:04:09.778 11:18:15 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:04:09.778 11:18:15 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:04:09.778 11:18:15 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:04:09.778 11:18:15 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:04:09.778 11:18:15 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:04:09.778 11:18:15 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:09.778 11:18:15 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0 00:04:09.778 11:18:15 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:04:09.778 11:18:15 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:09.778 11:18:15 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:04:09.778 11:18:15 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:09.778 11:18:15 -- scripts/common.sh@15 -- $ shopt -s extglob 00:04:09.778 11:18:15 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:04:09.778 11:18:15 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:09.778 11:18:15 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:09.778 11:18:15 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.778 11:18:15 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.778 11:18:15 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.778 11:18:15 -- paths/export.sh@5 -- $ export PATH 00:04:09.778 11:18:15 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.778 11:18:15 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:04:09.778 11:18:15 -- common/autobuild_common.sh@493 -- $ date +%s 00:04:09.778 11:18:15 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732101495.XXXXXX 00:04:09.778 11:18:15 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732101495.tXHtxg 00:04:09.778 11:18:15 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:04:09.778 11:18:15 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:04:09.778 11:18:15 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:04:09.778 11:18:15 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:04:09.778 11:18:15 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:04:09.778 11:18:15 -- common/autobuild_common.sh@509 -- $ get_config_params 00:04:09.778 11:18:15 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:04:09.778 11:18:15 -- common/autotest_common.sh@10 -- $ set +x 00:04:09.778 11:18:15 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:04:09.778 11:18:15 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:04:09.778 11:18:15 -- pm/common@17 -- $ local monitor 00:04:09.778 11:18:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:09.778 11:18:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:09.778 11:18:15 -- pm/common@25 -- $ sleep 1 00:04:09.778 11:18:15 -- pm/common@21 -- $ date +%s 00:04:09.779 11:18:15 -- pm/common@21 -- $ date +%s 00:04:09.779 11:18:15 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732101495 00:04:09.779 11:18:15 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732101495 00:04:09.779 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732101495_collect-vmstat.pm.log 00:04:09.779 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732101495_collect-cpu-load.pm.log 00:04:11.157 11:18:16 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:04:11.157 11:18:16 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:04:11.157 11:18:16 -- spdk/autobuild.sh@12 -- $ umask 022 00:04:11.157 11:18:16 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:04:11.157 11:18:16 -- spdk/autobuild.sh@16 -- $ date -u 00:04:11.157 Wed Nov 20 11:18:16 AM UTC 2024 00:04:11.157 11:18:16 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:04:11.157 v25.01-pre-214-gf86091626 00:04:11.157 11:18:16 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:04:11.157 11:18:16 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:04:11.157 11:18:16 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:04:11.157 11:18:16 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:04:11.157 11:18:16 -- common/autotest_common.sh@10 -- $ set +x 00:04:11.157 ************************************ 00:04:11.157 START TEST asan 00:04:11.157 ************************************ 00:04:11.157 using asan 00:04:11.157 11:18:16 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:04:11.157 00:04:11.157 real 0m0.000s 00:04:11.157 user 0m0.000s 00:04:11.157 sys 0m0.000s 00:04:11.157 11:18:16 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:11.157 11:18:16 asan -- common/autotest_common.sh@10 -- $ set +x 00:04:11.157 ************************************ 00:04:11.157 END TEST asan 00:04:11.157 ************************************ 00:04:11.157 11:18:16 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:04:11.157 11:18:16 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:04:11.157 11:18:16 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:04:11.157 11:18:16 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:04:11.157 11:18:16 -- common/autotest_common.sh@10 -- $ set +x 00:04:11.157 ************************************ 00:04:11.157 START TEST ubsan 00:04:11.157 ************************************ 00:04:11.157 using ubsan 00:04:11.157 11:18:16 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:04:11.157 00:04:11.157 real 0m0.000s 00:04:11.157 user 0m0.000s 00:04:11.157 sys 0m0.000s 00:04:11.157 11:18:16 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:11.157 ************************************ 00:04:11.157 END TEST ubsan 00:04:11.157 ************************************ 00:04:11.157 11:18:16 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:04:11.157 11:18:16 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:04:11.157 11:18:16 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:04:11.157 11:18:16 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:04:11.157 11:18:16 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:04:11.157 11:18:16 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:04:11.157 11:18:16 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:04:11.157 11:18:16 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
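The asan and ubsan blocks above are produced by SPDK's run_test helper from autotest_common.sh, which brackets a command with START/END banners. The following is only a rough sketch of that visible behavior, assuming nothing beyond what the banners in this log show; the real helper also manages xtrace state, timing output (the real/user/sys lines), and suite bookkeeping.

# Approximation of run_test's observable output; not SPDK's actual source.
run_test_sketch() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    "$@"; local rc=$?                     # run the test command itself
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return "$rc"
}

run_test_sketch asan echo 'using asan'    # mirrors the invocation traced above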
00:04:11.157 11:18:16 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:04:11.157 11:18:16 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:04:11.157 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:04:11.157 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:04:11.724 Using 'verbs' RDMA provider 00:04:27.530 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:04:42.516 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:04:42.516 Creating mk/config.mk...done. 00:04:42.516 Creating mk/cc.flags.mk...done. 00:04:42.516 Type 'make' to build. 00:04:42.516 11:18:47 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:04:42.516 11:18:47 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:04:42.516 11:18:47 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:04:42.516 11:18:47 -- common/autotest_common.sh@10 -- $ set +x 00:04:42.516 ************************************ 00:04:42.516 START TEST make 00:04:42.516 ************************************ 00:04:42.516 11:18:47 make -- common/autotest_common.sh@1129 -- $ make -j10 00:04:42.516 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:04:42.516 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:04:42.516 meson setup builddir \ 00:04:42.516 -Dwith-libaio=enabled \ 00:04:42.516 -Dwith-liburing=enabled \ 00:04:42.516 -Dwith-libvfn=disabled \ 00:04:42.516 -Dwith-spdk=disabled \ 00:04:42.516 -Dexamples=false \ 00:04:42.516 -Dtests=false \ 00:04:42.516 -Dtools=false && \ 00:04:42.516 meson compile -C builddir && \ 00:04:42.516 cd -) 00:04:42.516 make[1]: Nothing to be done for 'all'. 
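The parenthesized block above configures xnvme's embedded meson project with libaio and liburing enabled and libvfn, spdk, examples, tests, and tools switched off; meson's own report of that setup follows next. As a side note, once setup has run, the resolved options can be re-checked with meson's standard introspection (stock meson CLI, builddir path taken from the log):

# Print the resolved xnvme build options after `meson setup`.
cd /home/vagrant/spdk_repo/spdk/xnvme
meson configure builddir | grep -E 'with-(libaio|liburing|libvfn|spdk)|examples|tests|tools'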
00:04:45.052 The Meson build system 00:04:45.052 Version: 1.5.0 00:04:45.052 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:04:45.052 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:04:45.052 Build type: native build 00:04:45.052 Project name: xnvme 00:04:45.052 Project version: 0.7.5 00:04:45.052 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:45.052 C linker for the host machine: cc ld.bfd 2.40-14 00:04:45.052 Host machine cpu family: x86_64 00:04:45.052 Host machine cpu: x86_64 00:04:45.052 Message: host_machine.system: linux 00:04:45.052 Compiler for C supports arguments -Wno-missing-braces: YES 00:04:45.052 Compiler for C supports arguments -Wno-cast-function-type: YES 00:04:45.052 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:04:45.052 Run-time dependency threads found: YES 00:04:45.052 Has header "setupapi.h" : NO 00:04:45.052 Has header "linux/blkzoned.h" : YES 00:04:45.052 Has header "linux/blkzoned.h" : YES (cached) 00:04:45.052 Has header "libaio.h" : YES 00:04:45.052 Library aio found: YES 00:04:45.052 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:45.052 Run-time dependency liburing found: YES 2.2 00:04:45.052 Dependency libvfn skipped: feature with-libvfn disabled 00:04:45.052 Found CMake: /usr/bin/cmake (3.27.7) 00:04:45.052 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:04:45.052 Subproject spdk : skipped: feature with-spdk disabled 00:04:45.052 Run-time dependency appleframeworks found: NO (tried framework) 00:04:45.052 Run-time dependency appleframeworks found: NO (tried framework) 00:04:45.052 Library rt found: YES 00:04:45.052 Checking for function "clock_gettime" with dependency -lrt: YES 00:04:45.052 Configuring xnvme_config.h using configuration 00:04:45.052 Configuring xnvme.spec using configuration 00:04:45.052 Run-time dependency bash-completion found: YES 2.11 00:04:45.052 Message: Bash-completions: /usr/share/bash-completion/completions 00:04:45.052 Program cp found: YES (/usr/bin/cp) 00:04:45.052 Build targets in project: 3 00:04:45.052 00:04:45.052 xnvme 0.7.5 00:04:45.052 00:04:45.052 Subprojects 00:04:45.052 spdk : NO Feature 'with-spdk' disabled 00:04:45.052 00:04:45.052 User defined options 00:04:45.052 examples : false 00:04:45.052 tests : false 00:04:45.052 tools : false 00:04:45.052 with-libaio : enabled 00:04:45.052 with-liburing: enabled 00:04:45.052 with-libvfn : disabled 00:04:45.052 with-spdk : disabled 00:04:45.052 00:04:45.052 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:45.311 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:04:45.311 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:04:45.311 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:04:45.311 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:04:45.311 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:04:45.311 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:04:45.311 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:04:45.311 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:04:45.311 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:04:45.569 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:04:45.569 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 
00:04:45.569 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:04:45.569 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:04:45.569 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:04:45.569 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:04:45.569 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:04:45.569 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:04:45.569 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:04:45.569 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:04:45.569 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:04:45.569 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:04:45.569 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:04:45.569 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:04:45.569 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:04:45.569 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:04:45.569 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:04:45.828 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:04:45.828 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:04:45.828 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:04:45.828 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:04:45.828 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:04:45.828 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:04:45.828 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:04:45.828 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:04:45.828 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:04:45.828 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:04:45.828 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:04:45.828 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:04:45.828 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:04:45.828 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:04:45.828 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:04:45.828 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:04:45.828 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:04:45.828 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:04:45.828 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:04:45.828 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:04:45.828 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:04:45.828 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:04:45.828 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:04:45.828 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:04:45.828 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 
00:04:45.828 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:04:45.828 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:04:45.828 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:04:46.086 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:04:46.086 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:04:46.086 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:04:46.086 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:04:46.086 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:04:46.086 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:04:46.086 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:04:46.086 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:04:46.086 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:04:46.086 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:04:46.086 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:04:46.086 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:04:46.086 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:04:46.344 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:04:46.344 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:04:46.344 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:04:46.344 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:04:46.344 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:04:46.344 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:04:46.344 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:04:46.970 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:04:46.970 [75/76] Linking static target lib/libxnvme.a 00:04:46.970 [76/76] Linking target lib/libxnvme.so.0.7.5 00:04:46.970 INFO: autodetecting backend as ninja 00:04:46.970 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:04:46.970 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:04:56.981 The Meson build system 00:04:56.981 Version: 1.5.0 00:04:56.981 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:04:56.981 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:04:56.981 Build type: native build 00:04:56.981 Program cat found: YES (/usr/bin/cat) 00:04:56.981 Project name: DPDK 00:04:56.981 Project version: 24.03.0 00:04:56.981 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:56.981 C linker for the host machine: cc ld.bfd 2.40-14 00:04:56.981 Host machine cpu family: x86_64 00:04:56.981 Host machine cpu: x86_64 00:04:56.981 Message: ## Building in Developer Mode ## 00:04:56.981 Program pkg-config found: YES (/usr/bin/pkg-config) 00:04:56.981 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:04:56.981 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:04:56.981 Program python3 found: YES (/usr/bin/python3) 00:04:56.981 Program cat found: YES (/usr/bin/cat) 00:04:56.981 Compiler for C supports arguments -march=native: YES 00:04:56.981 Checking for size of "void *" : 8 00:04:56.981 Checking for size of "void *" : 8 (cached) 00:04:56.981 Compiler for C supports 
link arguments -Wl,--undefined-version: YES 00:04:56.981 Library m found: YES 00:04:56.981 Library numa found: YES 00:04:56.981 Has header "numaif.h" : YES 00:04:56.981 Library fdt found: NO 00:04:56.981 Library execinfo found: NO 00:04:56.981 Has header "execinfo.h" : YES 00:04:56.981 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:56.981 Run-time dependency libarchive found: NO (tried pkgconfig) 00:04:56.981 Run-time dependency libbsd found: NO (tried pkgconfig) 00:04:56.981 Run-time dependency jansson found: NO (tried pkgconfig) 00:04:56.981 Run-time dependency openssl found: YES 3.1.1 00:04:56.981 Run-time dependency libpcap found: YES 1.10.4 00:04:56.981 Has header "pcap.h" with dependency libpcap: YES 00:04:56.981 Compiler for C supports arguments -Wcast-qual: YES 00:04:56.981 Compiler for C supports arguments -Wdeprecated: YES 00:04:56.981 Compiler for C supports arguments -Wformat: YES 00:04:56.981 Compiler for C supports arguments -Wformat-nonliteral: NO 00:04:56.981 Compiler for C supports arguments -Wformat-security: NO 00:04:56.981 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:56.981 Compiler for C supports arguments -Wmissing-prototypes: YES 00:04:56.981 Compiler for C supports arguments -Wnested-externs: YES 00:04:56.981 Compiler for C supports arguments -Wold-style-definition: YES 00:04:56.981 Compiler for C supports arguments -Wpointer-arith: YES 00:04:56.981 Compiler for C supports arguments -Wsign-compare: YES 00:04:56.981 Compiler for C supports arguments -Wstrict-prototypes: YES 00:04:56.981 Compiler for C supports arguments -Wundef: YES 00:04:56.981 Compiler for C supports arguments -Wwrite-strings: YES 00:04:56.981 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:04:56.981 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:04:56.981 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:56.981 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:04:56.981 Program objdump found: YES (/usr/bin/objdump) 00:04:56.981 Compiler for C supports arguments -mavx512f: YES 00:04:56.981 Checking if "AVX512 checking" compiles: YES 00:04:56.981 Fetching value of define "__SSE4_2__" : 1 00:04:56.981 Fetching value of define "__AES__" : 1 00:04:56.981 Fetching value of define "__AVX__" : 1 00:04:56.981 Fetching value of define "__AVX2__" : 1 00:04:56.981 Fetching value of define "__AVX512BW__" : 1 00:04:56.981 Fetching value of define "__AVX512CD__" : 1 00:04:56.981 Fetching value of define "__AVX512DQ__" : 1 00:04:56.981 Fetching value of define "__AVX512F__" : 1 00:04:56.981 Fetching value of define "__AVX512VL__" : 1 00:04:56.981 Fetching value of define "__PCLMUL__" : 1 00:04:56.981 Fetching value of define "__RDRND__" : 1 00:04:56.981 Fetching value of define "__RDSEED__" : 1 00:04:56.981 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:04:56.981 Fetching value of define "__znver1__" : (undefined) 00:04:56.981 Fetching value of define "__znver2__" : (undefined) 00:04:56.981 Fetching value of define "__znver3__" : (undefined) 00:04:56.981 Fetching value of define "__znver4__" : (undefined) 00:04:56.981 Library asan found: YES 00:04:56.981 Compiler for C supports arguments -Wno-format-truncation: YES 00:04:56.981 Message: lib/log: Defining dependency "log" 00:04:56.981 Message: lib/kvargs: Defining dependency "kvargs" 00:04:56.981 Message: lib/telemetry: Defining dependency "telemetry" 00:04:56.981 Library rt found: YES 00:04:56.981 Checking for function "getentropy" : 
NO 00:04:56.981 Message: lib/eal: Defining dependency "eal" 00:04:56.981 Message: lib/ring: Defining dependency "ring" 00:04:56.981 Message: lib/rcu: Defining dependency "rcu" 00:04:56.981 Message: lib/mempool: Defining dependency "mempool" 00:04:56.981 Message: lib/mbuf: Defining dependency "mbuf" 00:04:56.981 Fetching value of define "__PCLMUL__" : 1 (cached) 00:04:56.981 Fetching value of define "__AVX512F__" : 1 (cached) 00:04:56.981 Fetching value of define "__AVX512BW__" : 1 (cached) 00:04:56.981 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:04:56.981 Fetching value of define "__AVX512VL__" : 1 (cached) 00:04:56.981 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:04:56.981 Compiler for C supports arguments -mpclmul: YES 00:04:56.982 Compiler for C supports arguments -maes: YES 00:04:56.982 Compiler for C supports arguments -mavx512f: YES (cached) 00:04:56.982 Compiler for C supports arguments -mavx512bw: YES 00:04:56.982 Compiler for C supports arguments -mavx512dq: YES 00:04:56.982 Compiler for C supports arguments -mavx512vl: YES 00:04:56.982 Compiler for C supports arguments -mvpclmulqdq: YES 00:04:56.982 Compiler for C supports arguments -mavx2: YES 00:04:56.982 Compiler for C supports arguments -mavx: YES 00:04:56.982 Message: lib/net: Defining dependency "net" 00:04:56.982 Message: lib/meter: Defining dependency "meter" 00:04:56.982 Message: lib/ethdev: Defining dependency "ethdev" 00:04:56.982 Message: lib/pci: Defining dependency "pci" 00:04:56.982 Message: lib/cmdline: Defining dependency "cmdline" 00:04:56.982 Message: lib/hash: Defining dependency "hash" 00:04:56.982 Message: lib/timer: Defining dependency "timer" 00:04:56.982 Message: lib/compressdev: Defining dependency "compressdev" 00:04:56.982 Message: lib/cryptodev: Defining dependency "cryptodev" 00:04:56.982 Message: lib/dmadev: Defining dependency "dmadev" 00:04:56.982 Compiler for C supports arguments -Wno-cast-qual: YES 00:04:56.982 Message: lib/power: Defining dependency "power" 00:04:56.982 Message: lib/reorder: Defining dependency "reorder" 00:04:56.982 Message: lib/security: Defining dependency "security" 00:04:56.982 Has header "linux/userfaultfd.h" : YES 00:04:56.982 Has header "linux/vduse.h" : YES 00:04:56.982 Message: lib/vhost: Defining dependency "vhost" 00:04:56.982 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:04:56.982 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:04:56.982 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:04:56.982 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:04:56.982 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:04:56.982 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:04:56.982 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:04:56.982 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:04:56.982 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:04:56.982 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:04:56.982 Program doxygen found: YES (/usr/local/bin/doxygen) 00:04:56.982 Configuring doxy-api-html.conf using configuration 00:04:56.982 Configuring doxy-api-man.conf using configuration 00:04:56.982 Program mandb found: YES (/usr/bin/mandb) 00:04:56.982 Program sphinx-build found: NO 00:04:56.982 Configuring rte_build_config.h using configuration 00:04:56.982 Message: 00:04:56.982 ================= 00:04:56.982 
Applications Enabled
00:04:56.982 =================
00:04:56.982 
00:04:56.982 apps:
00:04:56.982 
00:04:56.982 
00:04:56.982 Message:
00:04:56.982 =================
00:04:56.982 Libraries Enabled
00:04:56.982 =================
00:04:56.982 
00:04:56.982 libs:
00:04:56.982 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:04:56.982 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:04:56.982 cryptodev, dmadev, power, reorder, security, vhost,
00:04:56.982 
00:04:56.982 Message:
00:04:56.982 ===============
00:04:56.982 Drivers Enabled
00:04:56.982 ===============
00:04:56.982 
00:04:56.982 common:
00:04:56.982 
00:04:56.982 bus:
00:04:56.982 pci, vdev,
00:04:56.982 mempool:
00:04:56.982 ring,
00:04:56.982 dma:
00:04:56.982 
00:04:56.982 net:
00:04:56.982 
00:04:56.982 crypto:
00:04:56.982 
00:04:56.982 compress:
00:04:56.982 
00:04:56.982 vdpa:
00:04:56.982 
00:04:56.982 
00:04:56.982 Message:
00:04:56.982 =================
00:04:56.982 Content Skipped
00:04:56.982 =================
00:04:56.982 
00:04:56.982 apps:
00:04:56.982 dumpcap: explicitly disabled via build config
00:04:56.982 graph: explicitly disabled via build config
00:04:56.982 pdump: explicitly disabled via build config
00:04:56.982 proc-info: explicitly disabled via build config
00:04:56.982 test-acl: explicitly disabled via build config
00:04:56.982 test-bbdev: explicitly disabled via build config
00:04:56.982 test-cmdline: explicitly disabled via build config
00:04:56.982 test-compress-perf: explicitly disabled via build config
00:04:56.982 test-crypto-perf: explicitly disabled via build config
00:04:56.982 test-dma-perf: explicitly disabled via build config
00:04:56.982 test-eventdev: explicitly disabled via build config
00:04:56.982 test-fib: explicitly disabled via build config
00:04:56.982 test-flow-perf: explicitly disabled via build config
00:04:56.982 test-gpudev: explicitly disabled via build config
00:04:56.982 test-mldev: explicitly disabled via build config
00:04:56.982 test-pipeline: explicitly disabled via build config
00:04:56.982 test-pmd: explicitly disabled via build config
00:04:56.982 test-regex: explicitly disabled via build config
00:04:56.982 test-sad: explicitly disabled via build config
00:04:56.982 test-security-perf: explicitly disabled via build config
00:04:56.982 
00:04:56.982 libs:
00:04:56.982 argparse: explicitly disabled via build config
00:04:56.982 metrics: explicitly disabled via build config
00:04:56.982 acl: explicitly disabled via build config
00:04:56.982 bbdev: explicitly disabled via build config
00:04:56.982 bitratestats: explicitly disabled via build config
00:04:56.982 bpf: explicitly disabled via build config
00:04:56.982 cfgfile: explicitly disabled via build config
00:04:56.982 distributor: explicitly disabled via build config
00:04:56.982 efd: explicitly disabled via build config
00:04:56.982 eventdev: explicitly disabled via build config
00:04:56.982 dispatcher: explicitly disabled via build config
00:04:56.982 gpudev: explicitly disabled via build config
00:04:56.982 gro: explicitly disabled via build config
00:04:56.982 gso: explicitly disabled via build config
00:04:56.982 ip_frag: explicitly disabled via build config
00:04:56.982 jobstats: explicitly disabled via build config
00:04:56.982 latencystats: explicitly disabled via build config
00:04:56.982 lpm: explicitly disabled via build config
00:04:56.982 member: explicitly disabled via build config
00:04:56.982 pcapng: explicitly disabled via build config
00:04:56.982 rawdev: explicitly disabled via build config
00:04:56.982 regexdev: explicitly disabled via build config
00:04:56.982 mldev: explicitly disabled via build config
00:04:56.982 rib: explicitly disabled via build config
00:04:56.982 sched: explicitly disabled via build config
00:04:56.982 stack: explicitly disabled via build config
00:04:56.982 ipsec: explicitly disabled via build config
00:04:56.982 pdcp: explicitly disabled via build config
00:04:56.982 fib: explicitly disabled via build config
00:04:56.982 port: explicitly disabled via build config
00:04:56.982 pdump: explicitly disabled via build config
00:04:56.982 table: explicitly disabled via build config
00:04:56.982 pipeline: explicitly disabled via build config
00:04:56.982 graph: explicitly disabled via build config
00:04:56.982 node: explicitly disabled via build config
00:04:56.982 
00:04:56.982 drivers:
00:04:56.982 common/cpt: not in enabled drivers build config
00:04:56.982 common/dpaax: not in enabled drivers build config
00:04:56.982 common/iavf: not in enabled drivers build config
00:04:56.982 common/idpf: not in enabled drivers build config
00:04:56.982 common/ionic: not in enabled drivers build config
00:04:56.982 common/mvep: not in enabled drivers build config
00:04:56.982 common/octeontx: not in enabled drivers build config
00:04:56.982 bus/auxiliary: not in enabled drivers build config
00:04:56.982 bus/cdx: not in enabled drivers build config
00:04:56.982 bus/dpaa: not in enabled drivers build config
00:04:56.982 bus/fslmc: not in enabled drivers build config
00:04:56.982 bus/ifpga: not in enabled drivers build config
00:04:56.982 bus/platform: not in enabled drivers build config
00:04:56.982 bus/uacce: not in enabled drivers build config
00:04:56.982 bus/vmbus: not in enabled drivers build config
00:04:56.982 common/cnxk: not in enabled drivers build config
00:04:56.982 common/mlx5: not in enabled drivers build config
00:04:56.982 common/nfp: not in enabled drivers build config
00:04:56.982 common/nitrox: not in enabled drivers build config
00:04:56.982 common/qat: not in enabled drivers build config
00:04:56.982 common/sfc_efx: not in enabled drivers build config
00:04:56.982 mempool/bucket: not in enabled drivers build config
00:04:56.982 mempool/cnxk: not in enabled drivers build config
00:04:56.982 mempool/dpaa: not in enabled drivers build config
00:04:56.982 mempool/dpaa2: not in enabled drivers build config
00:04:56.982 mempool/octeontx: not in enabled drivers build config
00:04:56.982 mempool/stack: not in enabled drivers build config
00:04:56.982 dma/cnxk: not in enabled drivers build config
00:04:56.982 dma/dpaa: not in enabled drivers build config
00:04:56.982 dma/dpaa2: not in enabled drivers build config
00:04:56.982 dma/hisilicon: not in enabled drivers build config
00:04:56.982 dma/idxd: not in enabled drivers build config
00:04:56.982 dma/ioat: not in enabled drivers build config
00:04:56.982 dma/skeleton: not in enabled drivers build config
00:04:56.982 net/af_packet: not in enabled drivers build config
00:04:56.982 net/af_xdp: not in enabled drivers build config
00:04:56.982 net/ark: not in enabled drivers build config
00:04:56.982 net/atlantic: not in enabled drivers build config
00:04:56.982 net/avp: not in enabled drivers build config
00:04:56.982 net/axgbe: not in enabled drivers build config
00:04:56.982 net/bnx2x: not in enabled drivers build config
00:04:56.982 net/bnxt: not in enabled drivers build config
00:04:56.982 net/bonding: not in enabled drivers build config
00:04:56.982 net/cnxk: not in enabled drivers build config
00:04:56.982 net/cpfl: not in enabled drivers build config
00:04:56.982 net/cxgbe: not in enabled drivers build config
00:04:56.982 net/dpaa: not in enabled drivers build config
00:04:56.982 net/dpaa2: not in enabled drivers build config
00:04:56.982 net/e1000: not in enabled drivers build config
00:04:56.982 net/ena: not in enabled drivers build config
00:04:56.982 net/enetc: not in enabled drivers build config
00:04:56.982 net/enetfec: not in enabled drivers build config
00:04:56.982 net/enic: not in enabled drivers build config
00:04:56.982 net/failsafe: not in enabled drivers build config
00:04:56.982 net/fm10k: not in enabled drivers build config
00:04:56.982 net/gve: not in enabled drivers build config
00:04:56.982 net/hinic: not in enabled drivers build config
00:04:56.983 net/hns3: not in enabled drivers build config
00:04:56.983 net/i40e: not in enabled drivers build config
00:04:56.983 net/iavf: not in enabled drivers build config
00:04:56.983 net/ice: not in enabled drivers build config
00:04:56.983 net/idpf: not in enabled drivers build config
00:04:56.983 net/igc: not in enabled drivers build config
00:04:56.983 net/ionic: not in enabled drivers build config
00:04:56.983 net/ipn3ke: not in enabled drivers build config
00:04:56.983 net/ixgbe: not in enabled drivers build config
00:04:56.983 net/mana: not in enabled drivers build config
00:04:56.983 net/memif: not in enabled drivers build config
00:04:56.983 net/mlx4: not in enabled drivers build config
00:04:56.983 net/mlx5: not in enabled drivers build config
00:04:56.983 net/mvneta: not in enabled drivers build config
00:04:56.983 net/mvpp2: not in enabled drivers build config
00:04:56.983 net/netvsc: not in enabled drivers build config
00:04:56.983 net/nfb: not in enabled drivers build config
00:04:56.983 net/nfp: not in enabled drivers build config
00:04:56.983 net/ngbe: not in enabled drivers build config
00:04:56.983 net/null: not in enabled drivers build config
00:04:56.983 net/octeontx: not in enabled drivers build config
00:04:56.983 net/octeon_ep: not in enabled drivers build config
00:04:56.983 net/pcap: not in enabled drivers build config
00:04:56.983 net/pfe: not in enabled drivers build config
00:04:56.983 net/qede: not in enabled drivers build config
00:04:56.983 net/ring: not in enabled drivers build config
00:04:56.983 net/sfc: not in enabled drivers build config
00:04:56.983 net/softnic: not in enabled drivers build config
00:04:56.983 net/tap: not in enabled drivers build config
00:04:56.983 net/thunderx: not in enabled drivers build config
00:04:56.983 net/txgbe: not in enabled drivers build config
00:04:56.983 net/vdev_netvsc: not in enabled drivers build config
00:04:56.983 net/vhost: not in enabled drivers build config
00:04:56.983 net/virtio: not in enabled drivers build config
00:04:56.983 net/vmxnet3: not in enabled drivers build config
00:04:56.983 raw/*: missing internal dependency, "rawdev"
00:04:56.983 crypto/armv8: not in enabled drivers build config
00:04:56.983 crypto/bcmfs: not in enabled drivers build config
00:04:56.983 crypto/caam_jr: not in enabled drivers build config
00:04:56.983 crypto/ccp: not in enabled drivers build config
00:04:56.983 crypto/cnxk: not in enabled drivers build config
00:04:56.983 crypto/dpaa_sec: not in enabled drivers build config
00:04:56.983 crypto/dpaa2_sec: not in enabled drivers build config
00:04:56.983 crypto/ipsec_mb: not in enabled drivers build config
00:04:56.983 crypto/mlx5: not in enabled drivers build config
00:04:56.983 crypto/mvsam: not in enabled drivers build config
00:04:56.983 crypto/nitrox: not in enabled drivers build config
00:04:56.983 crypto/null: not in enabled drivers build config
00:04:56.983 crypto/octeontx: not in enabled drivers build config
00:04:56.983 crypto/openssl: not in enabled drivers build config
00:04:56.983 crypto/scheduler: not in enabled drivers build config
00:04:56.983 crypto/uadk: not in enabled drivers build config
00:04:56.983 crypto/virtio: not in enabled drivers build config
00:04:56.983 compress/isal: not in enabled drivers build config
00:04:56.983 compress/mlx5: not in enabled drivers build config
00:04:56.983 compress/nitrox: not in enabled drivers build config
00:04:56.983 compress/octeontx: not in enabled drivers build config
00:04:56.983 compress/zlib: not in enabled drivers build config
00:04:56.983 regex/*: missing internal dependency, "regexdev"
00:04:56.983 ml/*: missing internal dependency, "mldev"
00:04:56.983 vdpa/ifc: not in enabled drivers build config
00:04:56.983 vdpa/mlx5: not in enabled drivers build config
00:04:56.983 vdpa/nfp: not in enabled drivers build config
00:04:56.983 vdpa/sfc: not in enabled drivers build config
00:04:56.983 event/*: missing internal dependency, "eventdev"
00:04:56.983 baseband/*: missing internal dependency, "bbdev"
00:04:56.983 gpu/*: missing internal dependency, "gpudev"
00:04:56.983 
00:04:56.983 
00:04:57.550 Build targets in project: 85
00:04:57.550 
00:04:57.550 DPDK 24.03.0
00:04:57.550 
00:04:57.550 User defined options
00:04:57.550 buildtype : debug
00:04:57.550 default_library : shared
00:04:57.550 libdir : lib
00:04:57.550 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:04:57.550 b_sanitize : address
00:04:57.550 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:04:57.550 c_link_args : 
00:04:57.550 cpu_instruction_set: native
00:04:57.550 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:04:57.550 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:04:57.550 enable_docs : false
00:04:57.550 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:04:57.550 enable_kmods : false
00:04:57.550 max_lcores : 128
00:04:57.550 tests : false
00:04:57.550 
00:04:57.550 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:04:58.115 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:04:58.116 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:04:58.116 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:04:58.116 [3/268] Linking static target lib/librte_kvargs.a
00:04:58.374 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:04:58.374 [5/268] Linking static target lib/librte_log.a
00:04:58.374 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:04:58.940 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:04:58.940 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:04:58.940 [9/268] Compiling C object
lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:04:58.940 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:04:58.940 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:04:58.940 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:04:58.940 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:04:58.940 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:58.940 [15/268] Linking static target lib/librte_telemetry.a 00:04:58.940 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:59.198 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:59.198 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:04:59.457 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:04:59.457 [20/268] Linking target lib/librte_log.so.24.1 00:04:59.457 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:04:59.457 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:59.457 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:59.714 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:59.714 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:59.714 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:04:59.715 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:59.715 [28/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:04:59.973 [29/268] Linking target lib/librte_kvargs.so.24.1 00:04:59.973 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:04:59.973 [31/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:59.973 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:59.973 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:05:00.231 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:05:00.231 [35/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:05:00.231 [36/268] Linking target lib/librte_telemetry.so.24.1 00:05:00.231 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:05:00.231 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:05:00.556 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:05:00.556 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:05:00.556 [41/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:05:00.556 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:05:00.556 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:05:00.556 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:05:00.556 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:05:00.832 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:05:00.832 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:05:01.091 [48/268] 
Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:05:01.091 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:05:01.091 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:05:01.349 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:05:01.349 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:05:01.349 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:05:01.607 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:05:01.607 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:05:01.607 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:05:01.607 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:05:01.865 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:05:01.865 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:05:01.865 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:05:01.865 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:05:01.865 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:05:01.865 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:05:01.865 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:05:02.124 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:05:02.124 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:05:02.382 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:05:02.382 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:05:02.382 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:05:02.641 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:05:02.641 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:05:02.641 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:05:02.641 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:05:02.641 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:05:02.899 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:05:02.899 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:05:02.899 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:05:02.899 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:05:02.899 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:05:03.158 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:05:03.158 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:05:03.158 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:05:03.417 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:05:03.417 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:05:03.673 [85/268] Linking static target lib/librte_eal.a 00:05:03.673 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:05:03.673 [87/268] Linking static target lib/librte_ring.a 00:05:03.673 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:05:03.673 [89/268] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:05:03.673 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:05:03.931 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:05:03.931 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:05:03.931 [93/268] Linking static target lib/librte_mempool.a 00:05:03.931 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:05:04.190 [95/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:05:04.190 [96/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:05:04.190 [97/268] Linking static target lib/librte_rcu.a 00:05:04.447 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:05:04.448 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:05:04.448 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:05:04.448 [101/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:05:04.448 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:05:04.705 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:05:04.705 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:05:04.705 [105/268] Linking static target lib/librte_mbuf.a 00:05:04.705 [106/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:05:04.705 [107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:05:04.963 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:05:04.963 [109/268] Linking static target lib/librte_meter.a 00:05:04.963 [110/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:05:04.963 [111/268] Linking static target lib/librte_net.a 00:05:05.220 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:05:05.220 [113/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:05:05.477 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:05:05.477 [115/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:05:05.477 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:05:05.477 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:05:05.477 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:05:05.735 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:05:05.994 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:05:05.994 [121/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:05:05.994 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:05:06.561 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:05:06.561 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:05:06.561 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:05:06.561 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:05:06.561 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:05:06.561 [128/268] Linking static target lib/librte_pci.a 00:05:06.818 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:05:06.818 [130/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:05:06.818 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:05:06.818 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:05:06.818 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:05:06.818 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:05:07.077 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:05:07.077 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:05:07.077 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:05:07.077 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:05:07.077 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:05:07.077 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:05:07.077 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:05:07.077 [142/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:07.335 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:05:07.335 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:05:07.335 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:05:07.335 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:05:07.335 [147/268] Linking static target lib/librte_cmdline.a 00:05:07.595 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:05:07.862 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:05:07.862 [150/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:05:07.862 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:05:07.862 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:05:07.862 [153/268] Linking static target lib/librte_timer.a 00:05:08.119 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:05:08.377 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:05:08.377 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:05:08.635 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:05:08.635 [158/268] Linking static target lib/librte_ethdev.a 00:05:08.635 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:05:08.635 [160/268] Linking static target lib/librte_compressdev.a 00:05:08.635 [161/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:05:08.635 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:05:08.635 [163/268] Linking static target lib/librte_hash.a 00:05:08.635 [164/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:05:08.894 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:05:09.151 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:05:09.151 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:05:09.151 [168/268] Linking static target lib/librte_dmadev.a 00:05:09.152 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:05:09.152 [170/268] Compiling C object 
lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:05:09.409 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:05:09.409 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:05:09.409 [173/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:05:09.668 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:09.926 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:05:09.926 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:05:09.926 [177/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:05:09.926 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:05:10.185 [179/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:05:10.185 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:05:10.185 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:10.442 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:05:10.442 [183/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:05:10.442 [184/268] Linking static target lib/librte_cryptodev.a 00:05:10.701 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:05:10.701 [186/268] Linking static target lib/librte_power.a 00:05:10.959 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:05:10.959 [188/268] Linking static target lib/librte_reorder.a 00:05:10.959 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:05:11.216 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:05:11.216 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:05:11.780 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:05:11.780 [193/268] Linking static target lib/librte_security.a 00:05:11.780 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:05:12.095 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:05:12.352 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:05:12.352 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:05:12.352 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:05:12.610 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:05:12.610 [200/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:05:12.866 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:05:13.124 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:05:13.381 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:05:13.381 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:05:13.381 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:05:13.639 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:05:13.639 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:05:13.639 [208/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:05:13.896 [209/268] Compiling C 
object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:05:13.896 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:05:13.896 [211/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:14.155 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:05:14.155 [213/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:14.155 [214/268] Linking static target drivers/librte_bus_vdev.a 00:05:14.155 [215/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:05:14.155 [216/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:14.155 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:14.155 [218/268] Linking static target drivers/librte_bus_pci.a 00:05:14.155 [219/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:14.155 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:05:14.414 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:05:14.671 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:05:14.672 [223/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:14.672 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:14.672 [225/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:14.672 [226/268] Linking static target drivers/librte_mempool_ring.a 00:05:15.237 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:17.138 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:05:18.073 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:05:18.073 [230/268] Linking target lib/librte_eal.so.24.1 00:05:18.073 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:05:18.332 [232/268] Linking target lib/librte_dmadev.so.24.1 00:05:18.332 [233/268] Linking target lib/librte_pci.so.24.1 00:05:18.332 [234/268] Linking target lib/librte_meter.so.24.1 00:05:18.332 [235/268] Linking target lib/librte_timer.so.24.1 00:05:18.332 [236/268] Linking target lib/librte_ring.so.24.1 00:05:18.332 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:05:18.590 [238/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:05:18.590 [239/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:05:18.590 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:05:18.590 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:05:18.590 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:05:18.590 [243/268] Linking target drivers/librte_bus_pci.so.24.1 00:05:18.590 [244/268] Linking target lib/librte_mempool.so.24.1 00:05:18.590 [245/268] Linking target lib/librte_rcu.so.24.1 00:05:18.590 [246/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:18.848 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:05:18.848 [248/268] Generating symbol file 
lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:05:18.848 [249/268] Linking target lib/librte_mbuf.so.24.1 00:05:18.848 [250/268] Linking target drivers/librte_mempool_ring.so.24.1 00:05:19.107 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:05:19.107 [252/268] Linking target lib/librte_cryptodev.so.24.1 00:05:19.107 [253/268] Linking target lib/librte_compressdev.so.24.1 00:05:19.107 [254/268] Linking target lib/librte_net.so.24.1 00:05:19.107 [255/268] Linking target lib/librte_reorder.so.24.1 00:05:19.366 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:05:19.366 [257/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:05:19.366 [258/268] Linking target lib/librte_cmdline.so.24.1 00:05:19.366 [259/268] Linking target lib/librte_hash.so.24.1 00:05:19.366 [260/268] Linking target lib/librte_security.so.24.1 00:05:19.624 [261/268] Linking target lib/librte_ethdev.so.24.1 00:05:19.624 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:05:19.882 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:05:19.882 [264/268] Linking target lib/librte_power.so.24.1 00:05:22.413 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:05:22.413 [266/268] Linking static target lib/librte_vhost.a 00:05:23.789 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:05:23.789 [268/268] Linking target lib/librte_vhost.so.24.1 00:05:23.789 INFO: autodetecting backend as ninja 00:05:23.789 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:05:45.718 CC lib/ut_mock/mock.o 00:05:45.718 CC lib/ut/ut.o 00:05:45.718 CC lib/log/log_flags.o 00:05:45.718 CC lib/log/log.o 00:05:45.718 CC lib/log/log_deprecated.o 00:05:45.976 LIB libspdk_ut.a 00:05:45.976 LIB libspdk_ut_mock.a 00:05:45.976 SO libspdk_ut.so.2.0 00:05:45.976 SO libspdk_ut_mock.so.6.0 00:05:45.976 LIB libspdk_log.a 00:05:45.976 SYMLINK libspdk_ut.so 00:05:46.234 SYMLINK libspdk_ut_mock.so 00:05:46.234 SO libspdk_log.so.7.1 00:05:46.234 SYMLINK libspdk_log.so 00:05:46.492 CC lib/dma/dma.o 00:05:46.492 CC lib/util/base64.o 00:05:46.492 CC lib/util/bit_array.o 00:05:46.492 CC lib/util/cpuset.o 00:05:46.492 CC lib/util/crc16.o 00:05:46.492 CC lib/util/crc32c.o 00:05:46.492 CC lib/util/crc32.o 00:05:46.492 CXX lib/trace_parser/trace.o 00:05:46.492 CC lib/ioat/ioat.o 00:05:46.749 CC lib/vfio_user/host/vfio_user_pci.o 00:05:46.749 CC lib/util/crc32_ieee.o 00:05:46.749 CC lib/util/crc64.o 00:05:46.749 CC lib/util/dif.o 00:05:46.749 CC lib/util/fd.o 00:05:46.749 CC lib/vfio_user/host/vfio_user.o 00:05:46.749 CC lib/util/fd_group.o 00:05:47.008 CC lib/util/file.o 00:05:47.008 LIB libspdk_dma.a 00:05:47.008 CC lib/util/hexlify.o 00:05:47.008 SO libspdk_dma.so.5.0 00:05:47.008 CC lib/util/iov.o 00:05:47.008 CC lib/util/math.o 00:05:47.008 SYMLINK libspdk_dma.so 00:05:47.008 LIB libspdk_ioat.a 00:05:47.266 CC lib/util/net.o 00:05:47.266 LIB libspdk_vfio_user.a 00:05:47.266 SO libspdk_ioat.so.7.0 00:05:47.266 SO libspdk_vfio_user.so.5.0 00:05:47.266 CC lib/util/pipe.o 00:05:47.266 CC lib/util/strerror_tls.o 00:05:47.266 CC lib/util/string.o 00:05:47.266 SYMLINK libspdk_ioat.so 00:05:47.266 CC lib/util/uuid.o 00:05:47.266 SYMLINK libspdk_vfio_user.so 00:05:47.266 CC lib/util/xor.o 00:05:47.266 CC lib/util/zipf.o 00:05:47.266 CC 
lib/util/md5.o 00:05:47.525 LIB libspdk_util.a 00:05:47.782 SO libspdk_util.so.10.1 00:05:48.041 LIB libspdk_trace_parser.a 00:05:48.041 SYMLINK libspdk_util.so 00:05:48.041 SO libspdk_trace_parser.so.6.0 00:05:48.299 SYMLINK libspdk_trace_parser.so 00:05:48.299 CC lib/env_dpdk/env.o 00:05:48.299 CC lib/env_dpdk/memory.o 00:05:48.299 CC lib/env_dpdk/init.o 00:05:48.299 CC lib/env_dpdk/pci.o 00:05:48.299 CC lib/env_dpdk/threads.o 00:05:48.299 CC lib/vmd/vmd.o 00:05:48.299 CC lib/conf/conf.o 00:05:48.299 CC lib/rdma_utils/rdma_utils.o 00:05:48.299 CC lib/json/json_parse.o 00:05:48.299 CC lib/idxd/idxd.o 00:05:48.299 CC lib/env_dpdk/pci_ioat.o 00:05:48.558 LIB libspdk_conf.a 00:05:48.558 SO libspdk_conf.so.6.0 00:05:48.558 CC lib/env_dpdk/pci_virtio.o 00:05:48.558 SYMLINK libspdk_conf.so 00:05:48.558 CC lib/json/json_util.o 00:05:48.558 CC lib/json/json_write.o 00:05:48.558 LIB libspdk_rdma_utils.a 00:05:48.815 CC lib/env_dpdk/pci_vmd.o 00:05:48.815 SO libspdk_rdma_utils.so.1.0 00:05:48.815 CC lib/idxd/idxd_user.o 00:05:48.815 CC lib/idxd/idxd_kernel.o 00:05:48.815 SYMLINK libspdk_rdma_utils.so 00:05:48.815 CC lib/env_dpdk/pci_idxd.o 00:05:48.815 CC lib/env_dpdk/pci_event.o 00:05:49.073 LIB libspdk_json.a 00:05:49.073 SO libspdk_json.so.6.0 00:05:49.073 CC lib/vmd/led.o 00:05:49.073 CC lib/env_dpdk/sigbus_handler.o 00:05:49.073 CC lib/env_dpdk/pci_dpdk.o 00:05:49.073 CC lib/rdma_provider/common.o 00:05:49.073 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:49.073 SYMLINK libspdk_json.so 00:05:49.073 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:49.331 LIB libspdk_vmd.a 00:05:49.331 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:49.331 SO libspdk_vmd.so.6.0 00:05:49.331 SYMLINK libspdk_vmd.so 00:05:49.591 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:49.591 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:49.591 CC lib/jsonrpc/jsonrpc_server.o 00:05:49.591 CC lib/jsonrpc/jsonrpc_client.o 00:05:49.591 LIB libspdk_idxd.a 00:05:49.591 LIB libspdk_rdma_provider.a 00:05:49.591 SO libspdk_idxd.so.12.1 00:05:49.591 SO libspdk_rdma_provider.so.7.0 00:05:49.591 SYMLINK libspdk_idxd.so 00:05:49.591 SYMLINK libspdk_rdma_provider.so 00:05:49.910 LIB libspdk_jsonrpc.a 00:05:49.910 SO libspdk_jsonrpc.so.6.0 00:05:50.184 SYMLINK libspdk_jsonrpc.so 00:05:50.444 CC lib/rpc/rpc.o 00:05:50.444 LIB libspdk_env_dpdk.a 00:05:50.444 SO libspdk_env_dpdk.so.15.1 00:05:50.703 LIB libspdk_rpc.a 00:05:50.703 SO libspdk_rpc.so.6.0 00:05:50.703 SYMLINK libspdk_env_dpdk.so 00:05:50.703 SYMLINK libspdk_rpc.so 00:05:50.961 CC lib/notify/notify.o 00:05:50.961 CC lib/notify/notify_rpc.o 00:05:50.961 CC lib/keyring/keyring_rpc.o 00:05:50.961 CC lib/keyring/keyring.o 00:05:50.961 CC lib/trace/trace.o 00:05:50.961 CC lib/trace/trace_flags.o 00:05:50.961 CC lib/trace/trace_rpc.o 00:05:51.220 LIB libspdk_notify.a 00:05:51.220 SO libspdk_notify.so.6.0 00:05:51.220 LIB libspdk_keyring.a 00:05:51.478 LIB libspdk_trace.a 00:05:51.478 SO libspdk_keyring.so.2.0 00:05:51.478 SYMLINK libspdk_notify.so 00:05:51.478 SO libspdk_trace.so.11.0 00:05:51.478 SYMLINK libspdk_keyring.so 00:05:51.478 SYMLINK libspdk_trace.so 00:05:51.737 CC lib/thread/thread.o 00:05:51.737 CC lib/thread/iobuf.o 00:05:51.737 CC lib/sock/sock.o 00:05:51.737 CC lib/sock/sock_rpc.o 00:05:52.672 LIB libspdk_sock.a 00:05:52.672 SO libspdk_sock.so.10.0 00:05:52.672 SYMLINK libspdk_sock.so 00:05:52.930 CC lib/nvme/nvme_ctrlr.o 00:05:52.930 CC lib/nvme/nvme_fabric.o 00:05:52.930 CC lib/nvme/nvme_ns_cmd.o 00:05:52.930 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:52.930 CC lib/nvme/nvme_pcie_common.o 00:05:52.930 
CC lib/nvme/nvme_ns.o 00:05:52.930 CC lib/nvme/nvme_qpair.o 00:05:52.930 CC lib/nvme/nvme_pcie.o 00:05:52.930 CC lib/nvme/nvme.o 00:05:53.917 CC lib/nvme/nvme_quirks.o 00:05:53.917 CC lib/nvme/nvme_transport.o 00:05:53.917 CC lib/nvme/nvme_discovery.o 00:05:53.917 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:54.175 LIB libspdk_thread.a 00:05:54.175 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:54.175 SO libspdk_thread.so.11.0 00:05:54.433 SYMLINK libspdk_thread.so 00:05:54.433 CC lib/nvme/nvme_tcp.o 00:05:54.433 CC lib/nvme/nvme_opal.o 00:05:54.690 CC lib/nvme/nvme_io_msg.o 00:05:54.946 CC lib/accel/accel.o 00:05:54.946 CC lib/blob/blobstore.o 00:05:54.946 CC lib/blob/request.o 00:05:55.204 CC lib/init/json_config.o 00:05:55.204 CC lib/virtio/virtio.o 00:05:55.204 CC lib/fsdev/fsdev.o 00:05:55.204 CC lib/fsdev/fsdev_io.o 00:05:55.461 CC lib/fsdev/fsdev_rpc.o 00:05:55.718 CC lib/virtio/virtio_vhost_user.o 00:05:55.718 CC lib/init/subsystem.o 00:05:55.718 CC lib/virtio/virtio_vfio_user.o 00:05:55.976 CC lib/virtio/virtio_pci.o 00:05:55.976 CC lib/init/subsystem_rpc.o 00:05:55.976 CC lib/blob/zeroes.o 00:05:55.976 CC lib/blob/blob_bs_dev.o 00:05:55.976 CC lib/accel/accel_rpc.o 00:05:56.234 CC lib/init/rpc.o 00:05:56.234 CC lib/accel/accel_sw.o 00:05:56.234 LIB libspdk_fsdev.a 00:05:56.234 SO libspdk_fsdev.so.2.0 00:05:56.234 CC lib/nvme/nvme_poll_group.o 00:05:56.234 LIB libspdk_init.a 00:05:56.234 CC lib/nvme/nvme_zns.o 00:05:56.492 CC lib/nvme/nvme_stubs.o 00:05:56.492 SYMLINK libspdk_fsdev.so 00:05:56.492 LIB libspdk_virtio.a 00:05:56.492 SO libspdk_init.so.6.0 00:05:56.492 SO libspdk_virtio.so.7.0 00:05:56.492 SYMLINK libspdk_init.so 00:05:56.492 CC lib/nvme/nvme_auth.o 00:05:56.492 SYMLINK libspdk_virtio.so 00:05:56.754 CC lib/nvme/nvme_cuse.o 00:05:56.754 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:56.754 CC lib/event/app.o 00:05:57.013 CC lib/nvme/nvme_rdma.o 00:05:57.013 CC lib/event/reactor.o 00:05:57.013 CC lib/event/log_rpc.o 00:05:57.013 LIB libspdk_accel.a 00:05:57.272 CC lib/event/app_rpc.o 00:05:57.272 SO libspdk_accel.so.16.0 00:05:57.272 CC lib/event/scheduler_static.o 00:05:57.272 SYMLINK libspdk_accel.so 00:05:57.530 LIB libspdk_fuse_dispatcher.a 00:05:57.530 CC lib/bdev/bdev_rpc.o 00:05:57.530 CC lib/bdev/bdev_zone.o 00:05:57.530 CC lib/bdev/part.o 00:05:57.530 CC lib/bdev/bdev.o 00:05:57.530 SO libspdk_fuse_dispatcher.so.1.0 00:05:57.530 LIB libspdk_event.a 00:05:57.789 SO libspdk_event.so.14.0 00:05:57.789 SYMLINK libspdk_fuse_dispatcher.so 00:05:57.789 CC lib/bdev/scsi_nvme.o 00:05:57.789 SYMLINK libspdk_event.so 00:05:58.724 LIB libspdk_nvme.a 00:05:58.983 SO libspdk_nvme.so.15.0 00:05:59.551 SYMLINK libspdk_nvme.so 00:05:59.551 LIB libspdk_blob.a 00:05:59.912 SO libspdk_blob.so.11.0 00:05:59.912 SYMLINK libspdk_blob.so 00:06:00.171 CC lib/blobfs/blobfs.o 00:06:00.171 CC lib/blobfs/tree.o 00:06:00.171 CC lib/lvol/lvol.o 00:06:01.545 LIB libspdk_blobfs.a 00:06:01.545 SO libspdk_blobfs.so.10.0 00:06:01.545 LIB libspdk_bdev.a 00:06:01.545 SO libspdk_bdev.so.17.0 00:06:01.545 SYMLINK libspdk_blobfs.so 00:06:01.545 LIB libspdk_lvol.a 00:06:01.804 SO libspdk_lvol.so.10.0 00:06:01.804 SYMLINK libspdk_bdev.so 00:06:01.804 SYMLINK libspdk_lvol.so 00:06:02.062 CC lib/scsi/dev.o 00:06:02.062 CC lib/scsi/lun.o 00:06:02.062 CC lib/scsi/port.o 00:06:02.062 CC lib/scsi/scsi.o 00:06:02.062 CC lib/ublk/ublk.o 00:06:02.062 CC lib/scsi/scsi_pr.o 00:06:02.062 CC lib/scsi/scsi_bdev.o 00:06:02.062 CC lib/nvmf/ctrlr.o 00:06:02.062 CC lib/ftl/ftl_core.o 00:06:02.062 CC lib/nbd/nbd.o 00:06:02.062 CC 
lib/scsi/scsi_rpc.o 00:06:02.321 CC lib/ftl/ftl_init.o 00:06:02.321 CC lib/ftl/ftl_layout.o 00:06:02.321 CC lib/scsi/task.o 00:06:02.321 CC lib/ftl/ftl_debug.o 00:06:02.580 CC lib/ftl/ftl_io.o 00:06:02.581 CC lib/ftl/ftl_sb.o 00:06:02.581 CC lib/ftl/ftl_l2p.o 00:06:02.581 CC lib/ftl/ftl_l2p_flat.o 00:06:02.839 LIB libspdk_scsi.a 00:06:02.839 CC lib/ftl/ftl_nv_cache.o 00:06:02.839 CC lib/ftl/ftl_band.o 00:06:02.839 SO libspdk_scsi.so.9.0 00:06:02.839 CC lib/ftl/ftl_band_ops.o 00:06:02.839 CC lib/ftl/ftl_writer.o 00:06:02.839 CC lib/nbd/nbd_rpc.o 00:06:02.839 SYMLINK libspdk_scsi.so 00:06:02.839 CC lib/ftl/ftl_rq.o 00:06:03.098 CC lib/nvmf/ctrlr_discovery.o 00:06:03.098 CC lib/nvmf/ctrlr_bdev.o 00:06:03.098 CC lib/ublk/ublk_rpc.o 00:06:03.098 LIB libspdk_nbd.a 00:06:03.098 SO libspdk_nbd.so.7.0 00:06:03.098 CC lib/ftl/ftl_reloc.o 00:06:03.357 SYMLINK libspdk_nbd.so 00:06:03.357 CC lib/ftl/ftl_l2p_cache.o 00:06:03.357 CC lib/nvmf/subsystem.o 00:06:03.357 LIB libspdk_ublk.a 00:06:03.357 CC lib/ftl/ftl_p2l.o 00:06:03.357 CC lib/nvmf/nvmf.o 00:06:03.357 SO libspdk_ublk.so.3.0 00:06:03.357 SYMLINK libspdk_ublk.so 00:06:03.616 CC lib/nvmf/nvmf_rpc.o 00:06:03.876 CC lib/ftl/ftl_p2l_log.o 00:06:03.876 CC lib/ftl/mngt/ftl_mngt.o 00:06:03.876 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:06:04.135 CC lib/nvmf/transport.o 00:06:04.135 CC lib/nvmf/tcp.o 00:06:04.135 CC lib/nvmf/stubs.o 00:06:04.394 CC lib/nvmf/mdns_server.o 00:06:04.394 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:06:04.653 CC lib/nvmf/rdma.o 00:06:04.653 CC lib/nvmf/auth.o 00:06:04.653 CC lib/ftl/mngt/ftl_mngt_startup.o 00:06:05.252 CC lib/ftl/mngt/ftl_mngt_md.o 00:06:05.252 CC lib/ftl/mngt/ftl_mngt_misc.o 00:06:05.252 CC lib/iscsi/conn.o 00:06:05.252 CC lib/iscsi/init_grp.o 00:06:05.252 CC lib/vhost/vhost.o 00:06:05.538 CC lib/vhost/vhost_rpc.o 00:06:05.538 CC lib/vhost/vhost_scsi.o 00:06:05.796 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:06:05.796 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:06:06.361 CC lib/vhost/vhost_blk.o 00:06:06.362 CC lib/ftl/mngt/ftl_mngt_band.o 00:06:06.362 CC lib/iscsi/iscsi.o 00:06:06.362 CC lib/iscsi/param.o 00:06:06.620 CC lib/vhost/rte_vhost_user.o 00:06:06.620 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:06:06.879 CC lib/iscsi/portal_grp.o 00:06:06.879 CC lib/iscsi/tgt_node.o 00:06:06.879 CC lib/iscsi/iscsi_subsystem.o 00:06:06.879 CC lib/iscsi/iscsi_rpc.o 00:06:07.137 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:06:07.137 CC lib/iscsi/task.o 00:06:07.395 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:06:07.395 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:06:07.395 CC lib/ftl/utils/ftl_conf.o 00:06:07.653 CC lib/ftl/utils/ftl_md.o 00:06:07.653 CC lib/ftl/utils/ftl_mempool.o 00:06:07.653 CC lib/ftl/utils/ftl_bitmap.o 00:06:07.653 CC lib/ftl/utils/ftl_property.o 00:06:07.653 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:06:07.956 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:06:07.956 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:06:07.956 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:06:07.956 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:06:08.238 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:06:08.238 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:06:08.238 LIB libspdk_nvmf.a 00:06:08.238 LIB libspdk_vhost.a 00:06:08.238 CC lib/ftl/upgrade/ftl_sb_v3.o 00:06:08.238 CC lib/ftl/upgrade/ftl_sb_v5.o 00:06:08.496 CC lib/ftl/nvc/ftl_nvc_dev.o 00:06:08.496 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:06:08.496 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:06:08.496 SO libspdk_vhost.so.8.0 00:06:08.496 LIB libspdk_iscsi.a 00:06:08.496 SO libspdk_nvmf.so.20.0 00:06:08.496 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:06:08.496 
SO libspdk_iscsi.so.8.0 00:06:08.496 SYMLINK libspdk_vhost.so 00:06:08.753 CC lib/ftl/base/ftl_base_dev.o 00:06:08.753 CC lib/ftl/base/ftl_base_bdev.o 00:06:08.753 CC lib/ftl/ftl_trace.o 00:06:08.753 SYMLINK libspdk_nvmf.so 00:06:09.046 SYMLINK libspdk_iscsi.so 00:06:09.046 LIB libspdk_ftl.a 00:06:09.307 SO libspdk_ftl.so.9.0 00:06:09.872 SYMLINK libspdk_ftl.so 00:06:10.129 CC module/env_dpdk/env_dpdk_rpc.o 00:06:10.129 CC module/keyring/linux/keyring.o 00:06:10.129 CC module/keyring/file/keyring.o 00:06:10.387 CC module/accel/dsa/accel_dsa.o 00:06:10.387 CC module/sock/posix/posix.o 00:06:10.387 CC module/blob/bdev/blob_bdev.o 00:06:10.387 CC module/fsdev/aio/fsdev_aio.o 00:06:10.387 CC module/scheduler/dynamic/scheduler_dynamic.o 00:06:10.387 CC module/accel/error/accel_error.o 00:06:10.387 CC module/accel/ioat/accel_ioat.o 00:06:10.387 LIB libspdk_env_dpdk_rpc.a 00:06:10.387 SO libspdk_env_dpdk_rpc.so.6.0 00:06:10.658 CC module/keyring/linux/keyring_rpc.o 00:06:10.658 SYMLINK libspdk_env_dpdk_rpc.so 00:06:10.658 CC module/fsdev/aio/fsdev_aio_rpc.o 00:06:10.658 CC module/keyring/file/keyring_rpc.o 00:06:10.658 CC module/accel/error/accel_error_rpc.o 00:06:10.658 CC module/accel/ioat/accel_ioat_rpc.o 00:06:10.658 LIB libspdk_blob_bdev.a 00:06:10.658 LIB libspdk_scheduler_dynamic.a 00:06:10.658 SO libspdk_blob_bdev.so.11.0 00:06:10.658 SO libspdk_scheduler_dynamic.so.4.0 00:06:10.658 CC module/accel/dsa/accel_dsa_rpc.o 00:06:10.917 LIB libspdk_keyring_linux.a 00:06:10.917 LIB libspdk_accel_error.a 00:06:10.917 SYMLINK libspdk_scheduler_dynamic.so 00:06:10.917 SO libspdk_keyring_linux.so.1.0 00:06:10.917 LIB libspdk_keyring_file.a 00:06:10.917 SYMLINK libspdk_blob_bdev.so 00:06:10.917 LIB libspdk_accel_ioat.a 00:06:10.917 SO libspdk_accel_error.so.2.0 00:06:10.917 SO libspdk_keyring_file.so.2.0 00:06:10.917 SYMLINK libspdk_keyring_linux.so 00:06:10.917 SO libspdk_accel_ioat.so.6.0 00:06:10.917 SYMLINK libspdk_accel_error.so 00:06:10.917 SYMLINK libspdk_keyring_file.so 00:06:10.917 CC module/fsdev/aio/linux_aio_mgr.o 00:06:11.175 SYMLINK libspdk_accel_ioat.so 00:06:11.175 LIB libspdk_accel_dsa.a 00:06:11.175 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:06:11.175 SO libspdk_accel_dsa.so.5.0 00:06:11.175 CC module/scheduler/gscheduler/gscheduler.o 00:06:11.175 SYMLINK libspdk_accel_dsa.so 00:06:11.436 CC module/accel/iaa/accel_iaa.o 00:06:11.436 CC module/bdev/delay/vbdev_delay.o 00:06:11.436 CC module/blobfs/bdev/blobfs_bdev.o 00:06:11.436 LIB libspdk_scheduler_dpdk_governor.a 00:06:11.436 CC module/bdev/error/vbdev_error.o 00:06:11.436 SO libspdk_scheduler_dpdk_governor.so.4.0 00:06:11.436 LIB libspdk_scheduler_gscheduler.a 00:06:11.436 LIB libspdk_fsdev_aio.a 00:06:11.436 LIB libspdk_sock_posix.a 00:06:11.436 SYMLINK libspdk_scheduler_dpdk_governor.so 00:06:11.436 SO libspdk_scheduler_gscheduler.so.4.0 00:06:11.436 CC module/bdev/error/vbdev_error_rpc.o 00:06:11.436 SO libspdk_sock_posix.so.6.0 00:06:11.436 SO libspdk_fsdev_aio.so.1.0 00:06:11.436 CC module/bdev/lvol/vbdev_lvol.o 00:06:11.436 CC module/bdev/gpt/gpt.o 00:06:11.696 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:06:11.696 SYMLINK libspdk_scheduler_gscheduler.so 00:06:11.696 SYMLINK libspdk_sock_posix.so 00:06:11.696 CC module/bdev/gpt/vbdev_gpt.o 00:06:11.696 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:06:11.696 SYMLINK libspdk_fsdev_aio.so 00:06:11.696 CC module/bdev/delay/vbdev_delay_rpc.o 00:06:11.696 CC module/accel/iaa/accel_iaa_rpc.o 00:06:11.696 LIB libspdk_blobfs_bdev.a 00:06:11.955 SO libspdk_blobfs_bdev.so.6.0 
00:06:11.955 CC module/bdev/malloc/bdev_malloc.o 00:06:11.955 CC module/bdev/malloc/bdev_malloc_rpc.o 00:06:11.955 SYMLINK libspdk_blobfs_bdev.so 00:06:11.955 LIB libspdk_bdev_gpt.a 00:06:11.955 LIB libspdk_bdev_error.a 00:06:11.956 LIB libspdk_bdev_delay.a 00:06:11.956 LIB libspdk_accel_iaa.a 00:06:11.956 SO libspdk_bdev_error.so.6.0 00:06:11.956 CC module/bdev/null/bdev_null.o 00:06:11.956 SO libspdk_bdev_gpt.so.6.0 00:06:12.215 SO libspdk_bdev_delay.so.6.0 00:06:12.215 SO libspdk_accel_iaa.so.3.0 00:06:12.215 CC module/bdev/null/bdev_null_rpc.o 00:06:12.215 SYMLINK libspdk_bdev_error.so 00:06:12.215 CC module/bdev/nvme/bdev_nvme.o 00:06:12.215 SYMLINK libspdk_bdev_gpt.so 00:06:12.215 SYMLINK libspdk_bdev_delay.so 00:06:12.215 SYMLINK libspdk_accel_iaa.so 00:06:12.477 CC module/bdev/passthru/vbdev_passthru.o 00:06:12.477 CC module/bdev/nvme/bdev_nvme_rpc.o 00:06:12.477 CC module/bdev/split/vbdev_split.o 00:06:12.477 CC module/bdev/raid/bdev_raid.o 00:06:12.477 LIB libspdk_bdev_null.a 00:06:12.477 CC module/bdev/zone_block/vbdev_zone_block.o 00:06:12.477 LIB libspdk_bdev_malloc.a 00:06:12.477 SO libspdk_bdev_null.so.6.0 00:06:12.477 SO libspdk_bdev_malloc.so.6.0 00:06:12.477 CC module/bdev/xnvme/bdev_xnvme.o 00:06:12.739 SYMLINK libspdk_bdev_null.so 00:06:12.739 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:06:12.739 LIB libspdk_bdev_lvol.a 00:06:12.739 SYMLINK libspdk_bdev_malloc.so 00:06:12.739 CC module/bdev/nvme/nvme_rpc.o 00:06:12.739 SO libspdk_bdev_lvol.so.6.0 00:06:12.997 SYMLINK libspdk_bdev_lvol.so 00:06:12.997 CC module/bdev/nvme/bdev_mdns_client.o 00:06:12.997 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:06:12.997 CC module/bdev/nvme/vbdev_opal.o 00:06:12.997 CC module/bdev/split/vbdev_split_rpc.o 00:06:12.997 LIB libspdk_bdev_zone_block.a 00:06:12.997 SO libspdk_bdev_zone_block.so.6.0 00:06:12.997 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:06:12.997 SYMLINK libspdk_bdev_zone_block.so 00:06:13.256 CC module/bdev/nvme/vbdev_opal_rpc.o 00:06:13.256 LIB libspdk_bdev_passthru.a 00:06:13.256 LIB libspdk_bdev_split.a 00:06:13.256 SO libspdk_bdev_passthru.so.6.0 00:06:13.256 SO libspdk_bdev_split.so.6.0 00:06:13.256 SYMLINK libspdk_bdev_passthru.so 00:06:13.256 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:06:13.256 LIB libspdk_bdev_xnvme.a 00:06:13.256 CC module/bdev/raid/bdev_raid_rpc.o 00:06:13.256 CC module/bdev/aio/bdev_aio.o 00:06:13.515 SYMLINK libspdk_bdev_split.so 00:06:13.515 CC module/bdev/aio/bdev_aio_rpc.o 00:06:13.515 SO libspdk_bdev_xnvme.so.3.0 00:06:13.515 CC module/bdev/raid/bdev_raid_sb.o 00:06:13.515 CC module/bdev/raid/raid0.o 00:06:13.515 CC module/bdev/ftl/bdev_ftl.o 00:06:13.515 SYMLINK libspdk_bdev_xnvme.so 00:06:13.515 CC module/bdev/raid/raid1.o 00:06:13.515 CC module/bdev/ftl/bdev_ftl_rpc.o 00:06:13.775 CC module/bdev/raid/concat.o 00:06:13.775 LIB libspdk_bdev_aio.a 00:06:13.775 CC module/bdev/iscsi/bdev_iscsi.o 00:06:13.775 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:06:14.033 CC module/bdev/virtio/bdev_virtio_scsi.o 00:06:14.033 LIB libspdk_bdev_ftl.a 00:06:14.033 SO libspdk_bdev_aio.so.6.0 00:06:14.033 SO libspdk_bdev_ftl.so.6.0 00:06:14.033 CC module/bdev/virtio/bdev_virtio_blk.o 00:06:14.033 CC module/bdev/virtio/bdev_virtio_rpc.o 00:06:14.033 SYMLINK libspdk_bdev_ftl.so 00:06:14.033 SYMLINK libspdk_bdev_aio.so 00:06:14.292 LIB libspdk_bdev_raid.a 00:06:14.292 LIB libspdk_bdev_iscsi.a 00:06:14.550 SO libspdk_bdev_raid.so.6.0 00:06:14.550 SO libspdk_bdev_iscsi.so.6.0 00:06:14.550 SYMLINK libspdk_bdev_iscsi.so 00:06:14.550 SYMLINK 
libspdk_bdev_raid.so 00:06:14.550 LIB libspdk_bdev_virtio.a 00:06:14.807 SO libspdk_bdev_virtio.so.6.0 00:06:14.807 SYMLINK libspdk_bdev_virtio.so 00:06:16.750 LIB libspdk_bdev_nvme.a 00:06:16.750 SO libspdk_bdev_nvme.so.7.1 00:06:16.750 SYMLINK libspdk_bdev_nvme.so 00:06:17.316 CC module/event/subsystems/fsdev/fsdev.o 00:06:17.316 CC module/event/subsystems/vmd/vmd.o 00:06:17.316 CC module/event/subsystems/vmd/vmd_rpc.o 00:06:17.317 CC module/event/subsystems/scheduler/scheduler.o 00:06:17.317 CC module/event/subsystems/keyring/keyring.o 00:06:17.317 CC module/event/subsystems/sock/sock.o 00:06:17.317 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:06:17.317 CC module/event/subsystems/iobuf/iobuf.o 00:06:17.317 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:06:17.317 LIB libspdk_event_scheduler.a 00:06:17.317 LIB libspdk_event_sock.a 00:06:17.317 LIB libspdk_event_keyring.a 00:06:17.575 LIB libspdk_event_vhost_blk.a 00:06:17.575 SO libspdk_event_scheduler.so.4.0 00:06:17.575 SO libspdk_event_sock.so.5.0 00:06:17.575 SO libspdk_event_keyring.so.1.0 00:06:17.575 LIB libspdk_event_vmd.a 00:06:17.575 LIB libspdk_event_fsdev.a 00:06:17.575 SO libspdk_event_vhost_blk.so.3.0 00:06:17.575 LIB libspdk_event_iobuf.a 00:06:17.575 SO libspdk_event_fsdev.so.1.0 00:06:17.575 SO libspdk_event_vmd.so.6.0 00:06:17.575 SYMLINK libspdk_event_keyring.so 00:06:17.575 SYMLINK libspdk_event_scheduler.so 00:06:17.575 SYMLINK libspdk_event_sock.so 00:06:17.575 SO libspdk_event_iobuf.so.3.0 00:06:17.575 SYMLINK libspdk_event_vhost_blk.so 00:06:17.575 SYMLINK libspdk_event_fsdev.so 00:06:17.575 SYMLINK libspdk_event_vmd.so 00:06:17.575 SYMLINK libspdk_event_iobuf.so 00:06:17.832 CC module/event/subsystems/accel/accel.o 00:06:18.088 LIB libspdk_event_accel.a 00:06:18.088 SO libspdk_event_accel.so.6.0 00:06:18.345 SYMLINK libspdk_event_accel.so 00:06:18.604 CC module/event/subsystems/bdev/bdev.o 00:06:18.863 LIB libspdk_event_bdev.a 00:06:18.863 SO libspdk_event_bdev.so.6.0 00:06:18.863 SYMLINK libspdk_event_bdev.so 00:06:19.127 CC module/event/subsystems/ublk/ublk.o 00:06:19.127 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:06:19.127 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:06:19.127 CC module/event/subsystems/scsi/scsi.o 00:06:19.127 CC module/event/subsystems/nbd/nbd.o 00:06:19.384 LIB libspdk_event_scsi.a 00:06:19.384 LIB libspdk_event_nbd.a 00:06:19.384 SO libspdk_event_nbd.so.6.0 00:06:19.384 SO libspdk_event_scsi.so.6.0 00:06:19.384 LIB libspdk_event_ublk.a 00:06:19.384 SO libspdk_event_ublk.so.3.0 00:06:19.384 SYMLINK libspdk_event_nbd.so 00:06:19.384 LIB libspdk_event_nvmf.a 00:06:19.384 SYMLINK libspdk_event_scsi.so 00:06:19.384 SO libspdk_event_nvmf.so.6.0 00:06:19.384 SYMLINK libspdk_event_ublk.so 00:06:19.384 SYMLINK libspdk_event_nvmf.so 00:06:19.641 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:06:19.641 CC module/event/subsystems/iscsi/iscsi.o 00:06:19.899 LIB libspdk_event_vhost_scsi.a 00:06:19.899 LIB libspdk_event_iscsi.a 00:06:19.899 SO libspdk_event_vhost_scsi.so.3.0 00:06:19.899 SO libspdk_event_iscsi.so.6.0 00:06:19.899 SYMLINK libspdk_event_vhost_scsi.so 00:06:19.899 SYMLINK libspdk_event_iscsi.so 00:06:20.157 SO libspdk.so.6.0 00:06:20.157 SYMLINK libspdk.so 00:06:20.416 CC app/trace_record/trace_record.o 00:06:20.416 CXX app/trace/trace.o 00:06:20.416 TEST_HEADER include/spdk/accel.h 00:06:20.416 TEST_HEADER include/spdk/accel_module.h 00:06:20.416 TEST_HEADER include/spdk/assert.h 00:06:20.416 TEST_HEADER include/spdk/barrier.h 00:06:20.416 TEST_HEADER 
include/spdk/base64.h 00:06:20.416 TEST_HEADER include/spdk/bdev.h 00:06:20.416 TEST_HEADER include/spdk/bdev_module.h 00:06:20.416 TEST_HEADER include/spdk/bdev_zone.h 00:06:20.416 TEST_HEADER include/spdk/bit_array.h 00:06:20.416 TEST_HEADER include/spdk/bit_pool.h 00:06:20.416 TEST_HEADER include/spdk/blob_bdev.h 00:06:20.416 TEST_HEADER include/spdk/blobfs_bdev.h 00:06:20.416 TEST_HEADER include/spdk/blobfs.h 00:06:20.416 TEST_HEADER include/spdk/blob.h 00:06:20.416 TEST_HEADER include/spdk/conf.h 00:06:20.416 TEST_HEADER include/spdk/config.h 00:06:20.416 TEST_HEADER include/spdk/cpuset.h 00:06:20.416 TEST_HEADER include/spdk/crc16.h 00:06:20.416 CC examples/interrupt_tgt/interrupt_tgt.o 00:06:20.416 TEST_HEADER include/spdk/crc32.h 00:06:20.416 TEST_HEADER include/spdk/crc64.h 00:06:20.416 TEST_HEADER include/spdk/dif.h 00:06:20.416 TEST_HEADER include/spdk/dma.h 00:06:20.416 TEST_HEADER include/spdk/endian.h 00:06:20.416 TEST_HEADER include/spdk/env_dpdk.h 00:06:20.416 TEST_HEADER include/spdk/env.h 00:06:20.416 TEST_HEADER include/spdk/event.h 00:06:20.416 TEST_HEADER include/spdk/fd_group.h 00:06:20.416 TEST_HEADER include/spdk/fd.h 00:06:20.416 TEST_HEADER include/spdk/file.h 00:06:20.416 TEST_HEADER include/spdk/fsdev.h 00:06:20.416 TEST_HEADER include/spdk/fsdev_module.h 00:06:20.416 TEST_HEADER include/spdk/ftl.h 00:06:20.416 TEST_HEADER include/spdk/fuse_dispatcher.h 00:06:20.416 TEST_HEADER include/spdk/gpt_spec.h 00:06:20.416 TEST_HEADER include/spdk/hexlify.h 00:06:20.416 TEST_HEADER include/spdk/histogram_data.h 00:06:20.416 TEST_HEADER include/spdk/idxd.h 00:06:20.416 TEST_HEADER include/spdk/idxd_spec.h 00:06:20.674 TEST_HEADER include/spdk/init.h 00:06:20.674 TEST_HEADER include/spdk/ioat.h 00:06:20.674 CC examples/util/zipf/zipf.o 00:06:20.674 TEST_HEADER include/spdk/ioat_spec.h 00:06:20.674 TEST_HEADER include/spdk/iscsi_spec.h 00:06:20.674 CC test/thread/poller_perf/poller_perf.o 00:06:20.674 CC examples/ioat/perf/perf.o 00:06:20.674 TEST_HEADER include/spdk/json.h 00:06:20.674 TEST_HEADER include/spdk/jsonrpc.h 00:06:20.674 TEST_HEADER include/spdk/keyring.h 00:06:20.674 TEST_HEADER include/spdk/keyring_module.h 00:06:20.674 TEST_HEADER include/spdk/likely.h 00:06:20.674 CC test/app/bdev_svc/bdev_svc.o 00:06:20.674 TEST_HEADER include/spdk/log.h 00:06:20.674 TEST_HEADER include/spdk/lvol.h 00:06:20.674 TEST_HEADER include/spdk/md5.h 00:06:20.674 TEST_HEADER include/spdk/memory.h 00:06:20.674 TEST_HEADER include/spdk/mmio.h 00:06:20.674 CC test/dma/test_dma/test_dma.o 00:06:20.674 TEST_HEADER include/spdk/nbd.h 00:06:20.674 TEST_HEADER include/spdk/net.h 00:06:20.674 TEST_HEADER include/spdk/notify.h 00:06:20.674 TEST_HEADER include/spdk/nvme.h 00:06:20.674 TEST_HEADER include/spdk/nvme_intel.h 00:06:20.674 TEST_HEADER include/spdk/nvme_ocssd.h 00:06:20.674 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:06:20.674 TEST_HEADER include/spdk/nvme_spec.h 00:06:20.674 TEST_HEADER include/spdk/nvme_zns.h 00:06:20.674 TEST_HEADER include/spdk/nvmf_cmd.h 00:06:20.674 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:06:20.674 TEST_HEADER include/spdk/nvmf.h 00:06:20.674 TEST_HEADER include/spdk/nvmf_spec.h 00:06:20.674 TEST_HEADER include/spdk/nvmf_transport.h 00:06:20.674 TEST_HEADER include/spdk/opal.h 00:06:20.674 TEST_HEADER include/spdk/opal_spec.h 00:06:20.674 TEST_HEADER include/spdk/pci_ids.h 00:06:20.674 TEST_HEADER include/spdk/pipe.h 00:06:20.674 TEST_HEADER include/spdk/queue.h 00:06:20.674 TEST_HEADER include/spdk/reduce.h 00:06:20.674 CC 
test/env/mem_callbacks/mem_callbacks.o 00:06:20.674 TEST_HEADER include/spdk/rpc.h 00:06:20.674 TEST_HEADER include/spdk/scheduler.h 00:06:20.674 TEST_HEADER include/spdk/scsi.h 00:06:20.674 TEST_HEADER include/spdk/scsi_spec.h 00:06:20.674 TEST_HEADER include/spdk/sock.h 00:06:20.674 TEST_HEADER include/spdk/stdinc.h 00:06:20.674 TEST_HEADER include/spdk/string.h 00:06:20.674 TEST_HEADER include/spdk/thread.h 00:06:20.674 TEST_HEADER include/spdk/trace.h 00:06:20.674 TEST_HEADER include/spdk/trace_parser.h 00:06:20.674 TEST_HEADER include/spdk/tree.h 00:06:20.674 TEST_HEADER include/spdk/ublk.h 00:06:20.674 TEST_HEADER include/spdk/util.h 00:06:20.674 TEST_HEADER include/spdk/uuid.h 00:06:20.674 TEST_HEADER include/spdk/version.h 00:06:20.674 TEST_HEADER include/spdk/vfio_user_pci.h 00:06:20.674 TEST_HEADER include/spdk/vfio_user_spec.h 00:06:20.674 TEST_HEADER include/spdk/vhost.h 00:06:20.674 TEST_HEADER include/spdk/vmd.h 00:06:20.674 TEST_HEADER include/spdk/xor.h 00:06:20.674 TEST_HEADER include/spdk/zipf.h 00:06:20.674 CXX test/cpp_headers/accel.o 00:06:20.674 LINK spdk_trace_record 00:06:20.933 LINK interrupt_tgt 00:06:20.933 LINK bdev_svc 00:06:20.933 LINK ioat_perf 00:06:20.933 LINK poller_perf 00:06:20.933 LINK zipf 00:06:20.933 LINK spdk_trace 00:06:20.933 CXX test/cpp_headers/accel_module.o 00:06:21.190 CC test/env/vtophys/vtophys.o 00:06:21.190 CC examples/ioat/verify/verify.o 00:06:21.190 LINK mem_callbacks 00:06:21.190 CXX test/cpp_headers/assert.o 00:06:21.190 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:06:21.190 CC app/nvmf_tgt/nvmf_main.o 00:06:21.448 LINK vtophys 00:06:21.448 CC app/iscsi_tgt/iscsi_tgt.o 00:06:21.448 CC app/spdk_tgt/spdk_tgt.o 00:06:21.448 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:06:21.448 LINK test_dma 00:06:21.448 CXX test/cpp_headers/barrier.o 00:06:21.448 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:06:21.448 LINK nvmf_tgt 00:06:21.762 LINK verify 00:06:21.762 LINK env_dpdk_post_init 00:06:21.762 LINK iscsi_tgt 00:06:21.762 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:06:21.762 CXX test/cpp_headers/base64.o 00:06:21.762 CXX test/cpp_headers/bdev.o 00:06:21.762 LINK spdk_tgt 00:06:22.021 CXX test/cpp_headers/bdev_module.o 00:06:22.021 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:06:22.021 LINK nvme_fuzz 00:06:22.021 CC test/env/memory/memory_ut.o 00:06:22.279 CC app/spdk_lspci/spdk_lspci.o 00:06:22.279 CXX test/cpp_headers/bdev_zone.o 00:06:22.279 CXX test/cpp_headers/bit_array.o 00:06:22.279 CC examples/thread/thread/thread_ex.o 00:06:22.279 CXX test/cpp_headers/bit_pool.o 00:06:22.279 CC test/app/histogram_perf/histogram_perf.o 00:06:22.279 CC test/rpc_client/rpc_client_test.o 00:06:22.279 LINK spdk_lspci 00:06:22.539 CXX test/cpp_headers/blob_bdev.o 00:06:22.539 LINK rpc_client_test 00:06:22.539 LINK vhost_fuzz 00:06:22.539 CC test/accel/dif/dif.o 00:06:22.539 LINK histogram_perf 00:06:22.796 CC app/spdk_nvme_perf/perf.o 00:06:22.796 CXX test/cpp_headers/blobfs_bdev.o 00:06:22.796 LINK thread 00:06:22.796 CXX test/cpp_headers/blobfs.o 00:06:22.797 CXX test/cpp_headers/blob.o 00:06:22.797 CC examples/sock/hello_world/hello_sock.o 00:06:22.797 CXX test/cpp_headers/conf.o 00:06:23.055 CXX test/cpp_headers/config.o 00:06:23.055 CXX test/cpp_headers/cpuset.o 00:06:23.055 CC test/app/jsoncat/jsoncat.o 00:06:23.313 CC examples/vmd/lsvmd/lsvmd.o 00:06:23.313 LINK hello_sock 00:06:23.313 CXX test/cpp_headers/crc16.o 00:06:23.313 CC examples/idxd/perf/perf.o 00:06:23.313 LINK jsoncat 00:06:23.585 CC examples/fsdev/hello_world/hello_fsdev.o 
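Annotation: the CXX test/cpp_headers/*.o entries above compile each public header under include/spdk/ on its own as a C++ translation unit, which verifies every header is self-contained and usable from C++. A minimal sketch of the idea in shell (hypothetical check_headers.sh; the real harness is generated by the SPDK build, so the names and flags below are assumptions):

#!/usr/bin/env bash
# check_headers.sh -- sketch of the per-header compile check (hypothetical
# name; the real harness is generated by the SPDK build system).
set -euo pipefail
tmpdir=$(mktemp -d)
trap 'rm -rf "$tmpdir"' EXIT
for hdr in include/spdk/*.h; do
    name=$(basename "$hdr" .h)
    # One translation unit that includes only the header under test.
    printf '#include <spdk/%s.h>\n' "$name" > "$tmpdir/$name.cpp"
    # -fsyntax-only: pass/fail is all we need from the compiler.
    c++ -std=c++11 -Iinclude -fsyntax-only "$tmpdir/$name.cpp" \
        || echo "not self-contained: $hdr"
done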
00:06:23.585 LINK lsvmd 00:06:23.585 CXX test/cpp_headers/crc32.o 00:06:23.585 LINK memory_ut 00:06:23.585 CC examples/vmd/led/led.o 00:06:23.844 CXX test/cpp_headers/crc64.o 00:06:23.844 LINK hello_fsdev 00:06:23.844 LINK spdk_nvme_perf 00:06:23.844 CC test/blobfs/mkfs/mkfs.o 00:06:23.844 LINK dif 00:06:24.102 LINK idxd_perf 00:06:24.102 LINK led 00:06:24.102 LINK iscsi_fuzz 00:06:24.102 CC test/env/pci/pci_ut.o 00:06:24.102 CXX test/cpp_headers/dif.o 00:06:24.102 CC examples/accel/perf/accel_perf.o 00:06:24.102 LINK mkfs 00:06:24.102 CC app/spdk_nvme_identify/identify.o 00:06:24.361 CC test/event/event_perf/event_perf.o 00:06:24.361 CXX test/cpp_headers/dma.o 00:06:24.361 CC app/spdk_nvme_discover/discovery_aer.o 00:06:24.361 CC app/spdk_top/spdk_top.o 00:06:24.361 CXX test/cpp_headers/endian.o 00:06:24.361 CC test/app/stub/stub.o 00:06:24.619 LINK event_perf 00:06:24.619 CXX test/cpp_headers/env_dpdk.o 00:06:24.619 CC test/lvol/esnap/esnap.o 00:06:24.619 LINK pci_ut 00:06:24.619 LINK spdk_nvme_discover 00:06:24.877 LINK stub 00:06:24.877 CC test/event/reactor/reactor.o 00:06:24.877 CC examples/blob/hello_world/hello_blob.o 00:06:24.877 CXX test/cpp_headers/env.o 00:06:24.877 LINK reactor 00:06:25.134 CXX test/cpp_headers/event.o 00:06:25.134 LINK hello_blob 00:06:25.134 CC examples/blob/cli/blobcli.o 00:06:25.134 LINK accel_perf 00:06:25.134 CC app/vhost/vhost.o 00:06:25.134 CC app/spdk_dd/spdk_dd.o 00:06:25.391 CC test/event/reactor_perf/reactor_perf.o 00:06:25.391 CXX test/cpp_headers/fd_group.o 00:06:25.391 LINK vhost 00:06:25.391 LINK reactor_perf 00:06:25.391 CXX test/cpp_headers/fd.o 00:06:25.649 CC test/event/app_repeat/app_repeat.o 00:06:25.649 CC test/event/scheduler/scheduler.o 00:06:25.649 LINK spdk_nvme_identify 00:06:25.649 CXX test/cpp_headers/file.o 00:06:25.649 LINK blobcli 00:06:25.649 LINK app_repeat 00:06:25.907 LINK spdk_top 00:06:25.907 LINK scheduler 00:06:25.907 CC app/fio/nvme/fio_plugin.o 00:06:25.907 LINK spdk_dd 00:06:25.907 CXX test/cpp_headers/fsdev.o 00:06:25.907 CC test/nvme/aer/aer.o 00:06:26.164 CC test/nvme/sgl/sgl.o 00:06:26.164 CC test/nvme/reset/reset.o 00:06:26.164 CC test/nvme/e2edp/nvme_dp.o 00:06:26.164 CXX test/cpp_headers/fsdev_module.o 00:06:26.422 CC examples/nvme/hello_world/hello_world.o 00:06:26.422 CC app/fio/bdev/fio_plugin.o 00:06:26.422 CC examples/bdev/hello_world/hello_bdev.o 00:06:26.422 LINK aer 00:06:26.422 CXX test/cpp_headers/ftl.o 00:06:26.422 LINK reset 00:06:26.422 LINK sgl 00:06:26.422 LINK nvme_dp 00:06:26.681 LINK spdk_nvme 00:06:26.681 LINK hello_world 00:06:26.681 CXX test/cpp_headers/fuse_dispatcher.o 00:06:26.681 LINK hello_bdev 00:06:26.681 CC examples/nvme/reconnect/reconnect.o 00:06:26.681 CC examples/nvme/nvme_manage/nvme_manage.o 00:06:26.681 CC examples/nvme/arbitration/arbitration.o 00:06:26.681 CC test/nvme/overhead/overhead.o 00:06:26.940 CC test/nvme/err_injection/err_injection.o 00:06:26.940 CXX test/cpp_headers/gpt_spec.o 00:06:26.940 LINK spdk_bdev 00:06:26.940 CC examples/bdev/bdevperf/bdevperf.o 00:06:26.940 CXX test/cpp_headers/hexlify.o 00:06:26.940 CC test/bdev/bdevio/bdevio.o 00:06:26.940 LINK err_injection 00:06:27.198 LINK reconnect 00:06:27.198 LINK overhead 00:06:27.198 CC examples/nvme/hotplug/hotplug.o 00:06:27.198 LINK arbitration 00:06:27.198 CXX test/cpp_headers/histogram_data.o 00:06:27.457 CXX test/cpp_headers/idxd.o 00:06:27.457 CC test/nvme/startup/startup.o 00:06:27.457 LINK nvme_manage 00:06:27.457 CC examples/nvme/cmb_copy/cmb_copy.o 00:06:27.457 LINK bdevio 00:06:27.457 CC 
examples/nvme/abort/abort.o 00:06:27.457 CXX test/cpp_headers/idxd_spec.o 00:06:27.457 LINK startup 00:06:27.714 CC test/nvme/reserve/reserve.o 00:06:27.714 LINK hotplug 00:06:27.714 LINK cmb_copy 00:06:27.714 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:06:27.714 CXX test/cpp_headers/init.o 00:06:28.058 CC test/nvme/simple_copy/simple_copy.o 00:06:28.058 LINK pmr_persistence 00:06:28.058 LINK reserve 00:06:28.058 CC test/nvme/connect_stress/connect_stress.o 00:06:28.058 CC test/nvme/boot_partition/boot_partition.o 00:06:28.058 LINK abort 00:06:28.058 CC test/nvme/compliance/nvme_compliance.o 00:06:28.058 CXX test/cpp_headers/ioat.o 00:06:28.058 LINK bdevperf 00:06:28.058 LINK boot_partition 00:06:28.317 LINK simple_copy 00:06:28.317 LINK connect_stress 00:06:28.317 CXX test/cpp_headers/ioat_spec.o 00:06:28.317 CC test/nvme/fused_ordering/fused_ordering.o 00:06:28.317 CXX test/cpp_headers/iscsi_spec.o 00:06:28.317 CC test/nvme/doorbell_aers/doorbell_aers.o 00:06:28.317 CXX test/cpp_headers/json.o 00:06:28.577 CC test/nvme/fdp/fdp.o 00:06:28.577 CC test/nvme/cuse/cuse.o 00:06:28.577 CXX test/cpp_headers/jsonrpc.o 00:06:28.577 CXX test/cpp_headers/keyring.o 00:06:28.577 LINK doorbell_aers 00:06:28.577 CXX test/cpp_headers/keyring_module.o 00:06:28.577 CC examples/nvmf/nvmf/nvmf.o 00:06:28.834 LINK fused_ordering 00:06:28.834 CXX test/cpp_headers/likely.o 00:06:28.834 LINK nvme_compliance 00:06:28.834 CXX test/cpp_headers/log.o 00:06:28.834 CXX test/cpp_headers/lvol.o 00:06:28.834 LINK fdp 00:06:29.092 CXX test/cpp_headers/md5.o 00:06:29.092 CXX test/cpp_headers/memory.o 00:06:29.092 CXX test/cpp_headers/mmio.o 00:06:29.092 LINK nvmf 00:06:29.092 CXX test/cpp_headers/nbd.o 00:06:29.092 CXX test/cpp_headers/net.o 00:06:29.092 CXX test/cpp_headers/notify.o 00:06:29.092 CXX test/cpp_headers/nvme.o 00:06:29.092 CXX test/cpp_headers/nvme_intel.o 00:06:29.092 CXX test/cpp_headers/nvme_ocssd.o 00:06:29.350 CXX test/cpp_headers/nvme_ocssd_spec.o 00:06:29.350 CXX test/cpp_headers/nvme_spec.o 00:06:29.350 CXX test/cpp_headers/nvme_zns.o 00:06:29.350 CXX test/cpp_headers/nvmf_cmd.o 00:06:29.350 CXX test/cpp_headers/nvmf_fc_spec.o 00:06:29.607 CXX test/cpp_headers/nvmf.o 00:06:29.607 CXX test/cpp_headers/nvmf_spec.o 00:06:29.607 CXX test/cpp_headers/nvmf_transport.o 00:06:29.607 CXX test/cpp_headers/opal.o 00:06:29.607 CXX test/cpp_headers/opal_spec.o 00:06:29.864 CXX test/cpp_headers/pci_ids.o 00:06:29.864 CXX test/cpp_headers/pipe.o 00:06:29.864 CXX test/cpp_headers/queue.o 00:06:29.864 CXX test/cpp_headers/reduce.o 00:06:29.864 CXX test/cpp_headers/rpc.o 00:06:29.864 CXX test/cpp_headers/scheduler.o 00:06:29.864 CXX test/cpp_headers/scsi.o 00:06:29.864 CXX test/cpp_headers/scsi_spec.o 00:06:29.864 CXX test/cpp_headers/sock.o 00:06:30.122 CXX test/cpp_headers/stdinc.o 00:06:30.122 CXX test/cpp_headers/string.o 00:06:30.122 CXX test/cpp_headers/thread.o 00:06:30.122 CXX test/cpp_headers/trace.o 00:06:30.122 CXX test/cpp_headers/trace_parser.o 00:06:30.122 CXX test/cpp_headers/tree.o 00:06:30.122 CXX test/cpp_headers/ublk.o 00:06:30.381 CXX test/cpp_headers/util.o 00:06:30.381 CXX test/cpp_headers/uuid.o 00:06:30.381 CXX test/cpp_headers/version.o 00:06:30.381 CXX test/cpp_headers/vfio_user_pci.o 00:06:30.381 CXX test/cpp_headers/vfio_user_spec.o 00:06:30.381 CXX test/cpp_headers/vhost.o 00:06:30.381 CXX test/cpp_headers/vmd.o 00:06:30.381 CXX test/cpp_headers/xor.o 00:06:30.381 CXX test/cpp_headers/zipf.o 00:06:30.640 LINK cuse 00:06:32.543 LINK esnap 00:06:33.111 00:06:33.111 real 1m51.544s 
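Annotation: the asterisk banners and the real/user/sys summary above come from the harness timing each test stage as a whole. A condensed sketch of that wrapper pattern (hypothetical run_test; SPDK's actual helper in common/autotest_common.sh also toggles xtrace and propagates exit codes):

# Sketch of the stage wrapper behind the START/END TEST banners.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"            # produces the real/user/sys summary seen above
    local rc=$?          # $? is expanded before local runs, so rc is correct
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return "$rc"
}
# e.g. run_test make make -j"$(nproc)"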
00:06:33.111 user 10m23.815s 00:06:33.111 sys 2m21.132s 00:06:33.111 11:20:38 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:06:33.111 11:20:38 make -- common/autotest_common.sh@10 -- $ set +x 00:06:33.111 ************************************ 00:06:33.111 END TEST make 00:06:33.111 ************************************ 00:06:33.111 11:20:38 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:06:33.111 11:20:38 -- pm/common@29 -- $ signal_monitor_resources TERM 00:06:33.111 11:20:38 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:06:33.111 11:20:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:33.111 11:20:38 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:06:33.111 11:20:38 -- pm/common@44 -- $ pid=5343 00:06:33.111 11:20:38 -- pm/common@50 -- $ kill -TERM 5343 00:06:33.111 11:20:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:33.111 11:20:38 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:06:33.111 11:20:38 -- pm/common@44 -- $ pid=5344 00:06:33.111 11:20:38 -- pm/common@50 -- $ kill -TERM 5344 00:06:33.111 11:20:38 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:06:33.111 11:20:38 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:33.370 11:20:38 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:33.370 11:20:38 -- common/autotest_common.sh@1693 -- # lcov --version 00:06:33.370 11:20:38 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:33.370 11:20:39 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:33.370 11:20:39 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:33.370 11:20:39 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:33.370 11:20:39 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:33.370 11:20:39 -- scripts/common.sh@336 -- # IFS=.-: 00:06:33.370 11:20:39 -- scripts/common.sh@336 -- # read -ra ver1 00:06:33.370 11:20:39 -- scripts/common.sh@337 -- # IFS=.-: 00:06:33.370 11:20:39 -- scripts/common.sh@337 -- # read -ra ver2 00:06:33.370 11:20:39 -- scripts/common.sh@338 -- # local 'op=<' 00:06:33.370 11:20:39 -- scripts/common.sh@340 -- # ver1_l=2 00:06:33.370 11:20:39 -- scripts/common.sh@341 -- # ver2_l=1 00:06:33.370 11:20:39 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:33.370 11:20:39 -- scripts/common.sh@344 -- # case "$op" in 00:06:33.370 11:20:39 -- scripts/common.sh@345 -- # : 1 00:06:33.370 11:20:39 -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:33.370 11:20:39 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:33.370 11:20:39 -- scripts/common.sh@365 -- # decimal 1 00:06:33.370 11:20:39 -- scripts/common.sh@353 -- # local d=1 00:06:33.370 11:20:39 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:33.370 11:20:39 -- scripts/common.sh@355 -- # echo 1 00:06:33.370 11:20:39 -- scripts/common.sh@365 -- # ver1[v]=1 00:06:33.370 11:20:39 -- scripts/common.sh@366 -- # decimal 2 00:06:33.370 11:20:39 -- scripts/common.sh@353 -- # local d=2 00:06:33.370 11:20:39 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:33.370 11:20:39 -- scripts/common.sh@355 -- # echo 2 00:06:33.370 11:20:39 -- scripts/common.sh@366 -- # ver2[v]=2 00:06:33.370 11:20:39 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:33.370 11:20:39 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:33.370 11:20:39 -- scripts/common.sh@368 -- # return 0 00:06:33.370 11:20:39 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:33.370 11:20:39 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:33.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.370 --rc genhtml_branch_coverage=1 00:06:33.370 --rc genhtml_function_coverage=1 00:06:33.370 --rc genhtml_legend=1 00:06:33.370 --rc geninfo_all_blocks=1 00:06:33.370 --rc geninfo_unexecuted_blocks=1 00:06:33.370 00:06:33.370 ' 00:06:33.370 11:20:39 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:33.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.370 --rc genhtml_branch_coverage=1 00:06:33.370 --rc genhtml_function_coverage=1 00:06:33.370 --rc genhtml_legend=1 00:06:33.370 --rc geninfo_all_blocks=1 00:06:33.370 --rc geninfo_unexecuted_blocks=1 00:06:33.370 00:06:33.370 ' 00:06:33.370 11:20:39 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:33.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.370 --rc genhtml_branch_coverage=1 00:06:33.370 --rc genhtml_function_coverage=1 00:06:33.370 --rc genhtml_legend=1 00:06:33.370 --rc geninfo_all_blocks=1 00:06:33.370 --rc geninfo_unexecuted_blocks=1 00:06:33.370 00:06:33.370 ' 00:06:33.370 11:20:39 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:33.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.370 --rc genhtml_branch_coverage=1 00:06:33.370 --rc genhtml_function_coverage=1 00:06:33.370 --rc genhtml_legend=1 00:06:33.370 --rc geninfo_all_blocks=1 00:06:33.370 --rc geninfo_unexecuted_blocks=1 00:06:33.370 00:06:33.370 ' 00:06:33.370 11:20:39 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:33.370 11:20:39 -- nvmf/common.sh@7 -- # uname -s 00:06:33.370 11:20:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:33.370 11:20:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:33.370 11:20:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:33.370 11:20:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:33.370 11:20:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:33.370 11:20:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:33.370 11:20:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:33.370 11:20:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:33.370 11:20:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:33.370 11:20:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:33.370 11:20:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:17e3f908-f4fb-4b01-a2dd-8d15d253729f 00:06:33.370 
11:20:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=17e3f908-f4fb-4b01-a2dd-8d15d253729f 00:06:33.370 11:20:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:33.370 11:20:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:33.370 11:20:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:33.370 11:20:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:33.371 11:20:39 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:33.371 11:20:39 -- scripts/common.sh@15 -- # shopt -s extglob 00:06:33.371 11:20:39 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:33.371 11:20:39 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:33.371 11:20:39 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:33.371 11:20:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.371 11:20:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.371 11:20:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.371 11:20:39 -- paths/export.sh@5 -- # export PATH 00:06:33.371 11:20:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.371 11:20:39 -- nvmf/common.sh@51 -- # : 0 00:06:33.371 11:20:39 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:33.371 11:20:39 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:33.371 11:20:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:33.371 11:20:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:33.371 11:20:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:33.371 11:20:39 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:33.371 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:33.371 11:20:39 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:33.371 11:20:39 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:33.371 11:20:39 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:33.371 11:20:39 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:06:33.371 11:20:39 -- spdk/autotest.sh@32 -- # uname -s 00:06:33.371 11:20:39 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:06:33.371 11:20:39 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:06:33.371 11:20:39 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:33.371 11:20:39 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:06:33.371 11:20:39 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:33.371 11:20:39 -- spdk/autotest.sh@44 -- # modprobe nbd 00:06:33.657 11:20:39 -- spdk/autotest.sh@46 -- # type -P udevadm 00:06:33.657 11:20:39 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:06:33.657 11:20:39 -- spdk/autotest.sh@48 -- # udevadm_pid=55066 00:06:33.657 11:20:39 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:06:33.657 11:20:39 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:06:33.657 11:20:39 -- pm/common@17 -- # local monitor 00:06:33.657 11:20:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:33.657 11:20:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:33.657 11:20:39 -- pm/common@25 -- # sleep 1 00:06:33.657 11:20:39 -- pm/common@21 -- # date +%s 00:06:33.657 11:20:39 -- pm/common@21 -- # date +%s 00:06:33.657 11:20:39 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732101639 00:06:33.657 11:20:39 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732101639 00:06:33.657 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732101639_collect-vmstat.pm.log 00:06:33.657 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732101639_collect-cpu-load.pm.log 00:06:34.596 11:20:40 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:34.596 11:20:40 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:06:34.596 11:20:40 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:34.596 11:20:40 -- common/autotest_common.sh@10 -- # set +x 00:06:34.596 11:20:40 -- spdk/autotest.sh@59 -- # create_test_list 00:06:34.596 11:20:40 -- common/autotest_common.sh@752 -- # xtrace_disable 00:06:34.596 11:20:40 -- common/autotest_common.sh@10 -- # set +x 00:06:34.596 11:20:40 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:06:34.596 11:20:40 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:06:34.596 11:20:40 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:06:34.596 11:20:40 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:06:34.596 11:20:40 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:06:34.596 11:20:40 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:06:34.596 11:20:40 -- common/autotest_common.sh@1457 -- # uname 00:06:34.596 11:20:40 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:06:34.596 11:20:40 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:06:34.596 11:20:40 -- common/autotest_common.sh@1477 -- # uname 00:06:34.596 11:20:40 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:06:34.596 11:20:40 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:06:34.596 11:20:40 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:06:34.596 lcov: LCOV version 1.15 00:06:34.596 11:20:40 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:06:56.565 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:56.565 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:07:11.436 11:21:16 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:07:11.436 11:21:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:11.436 11:21:16 -- common/autotest_common.sh@10 -- # set +x 00:07:11.436 11:21:16 -- spdk/autotest.sh@78 -- # rm -f 00:07:11.436 11:21:16 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:11.436 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:12.003 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:07:12.003 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:07:12.263 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:07:12.263 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:07:12.263 11:21:17 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:07:12.263 11:21:17 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:07:12.263 11:21:17 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:07:12.263 11:21:17 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:07:12.263 11:21:17 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:12.263 11:21:17 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:07:12.263 11:21:17 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:07:12.263 11:21:17 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:12.263 11:21:17 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:12.263 11:21:17 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:12.263 11:21:17 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:07:12.263 11:21:17 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:07:12.263 11:21:17 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:12.263 11:21:17 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:12.263 11:21:17 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:12.263 11:21:17 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:07:12.263 11:21:17 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:07:12.263 11:21:17 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:12.263 11:21:17 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:12.263 11:21:17 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:12.263 11:21:17 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:07:12.263 11:21:17 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:07:12.263 11:21:17 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:12.263 11:21:17 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:12.263 11:21:17 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:12.263 11:21:17 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:07:12.263 11:21:17 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:07:12.263 11:21:17 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:12.263 11:21:17 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:12.263 11:21:17 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:12.263 11:21:17 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:07:12.263 11:21:17 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:07:12.263 11:21:17 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:12.263 11:21:17 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:12.263 11:21:17 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:12.263 11:21:17 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:07:12.263 11:21:17 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:07:12.263 11:21:17 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:12.263 11:21:17 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:12.263 11:21:17 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:07:12.263 11:21:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:12.263 11:21:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:12.263 11:21:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:07:12.263 11:21:17 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:07:12.263 11:21:17 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:07:12.263 No valid GPT data, bailing 00:07:12.263 11:21:17 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:07:12.263 11:21:17 -- scripts/common.sh@394 -- # pt= 00:07:12.263 11:21:17 -- scripts/common.sh@395 -- # return 1 00:07:12.263 11:21:17 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:07:12.263 1+0 records in 00:07:12.263 1+0 records out 00:07:12.263 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127974 s, 81.9 MB/s 00:07:12.263 11:21:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:12.263 11:21:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:12.264 11:21:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:07:12.264 11:21:17 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:07:12.264 11:21:17 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:07:12.264 No valid GPT data, bailing 00:07:12.264 11:21:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:07:12.264 11:21:18 -- scripts/common.sh@394 -- # pt= 00:07:12.264 11:21:18 -- scripts/common.sh@395 -- # return 1 00:07:12.264 11:21:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:07:12.264 1+0 records in 00:07:12.264 1+0 records out 00:07:12.264 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00414669 s, 253 MB/s 00:07:12.264 11:21:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:12.264 11:21:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:12.264 11:21:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:07:12.264 11:21:18 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:07:12.264 11:21:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:07:12.531 No valid GPT data, bailing 00:07:12.531 11:21:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:07:12.531 11:21:18 -- scripts/common.sh@394 -- # pt= 00:07:12.531 11:21:18 -- scripts/common.sh@395 -- # return 1 00:07:12.531 11:21:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:07:12.531 1+0 
records in 00:07:12.531 1+0 records out 00:07:12.531 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00420754 s, 249 MB/s 00:07:12.531 11:21:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:12.531 11:21:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:12.531 11:21:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:07:12.531 11:21:18 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:07:12.531 11:21:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:07:12.531 No valid GPT data, bailing 00:07:12.531 11:21:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:07:12.531 11:21:18 -- scripts/common.sh@394 -- # pt= 00:07:12.531 11:21:18 -- scripts/common.sh@395 -- # return 1 00:07:12.531 11:21:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:07:12.531 1+0 records in 00:07:12.531 1+0 records out 00:07:12.531 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00565016 s, 186 MB/s 00:07:12.531 11:21:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:12.531 11:21:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:12.531 11:21:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:07:12.531 11:21:18 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:07:12.531 11:21:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:07:12.531 No valid GPT data, bailing 00:07:12.531 11:21:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:07:12.531 11:21:18 -- scripts/common.sh@394 -- # pt= 00:07:12.531 11:21:18 -- scripts/common.sh@395 -- # return 1 00:07:12.531 11:21:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:07:12.789 1+0 records in 00:07:12.789 1+0 records out 00:07:12.789 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00599251 s, 175 MB/s 00:07:12.789 11:21:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:12.789 11:21:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:12.789 11:21:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:07:12.789 11:21:18 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:07:12.790 11:21:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:07:12.790 No valid GPT data, bailing 00:07:12.790 11:21:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:07:12.790 11:21:18 -- scripts/common.sh@394 -- # pt= 00:07:12.790 11:21:18 -- scripts/common.sh@395 -- # return 1 00:07:12.790 11:21:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:07:12.790 1+0 records in 00:07:12.790 1+0 records out 00:07:12.790 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0053211 s, 197 MB/s 00:07:12.790 11:21:18 -- spdk/autotest.sh@105 -- # sync 00:07:12.790 11:21:18 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:07:12.790 11:21:18 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:07:12.790 11:21:18 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:07:15.321 11:21:20 -- spdk/autotest.sh@111 -- # uname -s 00:07:15.321 11:21:20 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:07:15.321 11:21:20 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:07:15.321 11:21:20 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:07:15.580 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:16.147 
Hugepages 00:07:16.147 node hugesize free / total 00:07:16.147 node0 1048576kB 0 / 0 00:07:16.147 node0 2048kB 0 / 0 00:07:16.147 00:07:16.147 Type BDF Vendor Device NUMA Driver Device Block devices 00:07:16.147 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:07:16.445 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:07:16.445 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:07:16.445 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:07:16.445 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:07:16.445 11:21:22 -- spdk/autotest.sh@117 -- # uname -s 00:07:16.445 11:21:22 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:07:16.445 11:21:22 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:07:16.445 11:21:22 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:17.038 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:17.973 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:17.973 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:17.973 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:17.973 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:17.973 11:21:23 -- common/autotest_common.sh@1517 -- # sleep 1 00:07:19.347 11:21:24 -- common/autotest_common.sh@1518 -- # bdfs=() 00:07:19.347 11:21:24 -- common/autotest_common.sh@1518 -- # local bdfs 00:07:19.347 11:21:24 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:07:19.347 11:21:24 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:07:19.347 11:21:24 -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:19.347 11:21:24 -- common/autotest_common.sh@1498 -- # local bdfs 00:07:19.347 11:21:24 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:19.347 11:21:24 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:19.347 11:21:24 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:19.347 11:21:24 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:19.347 11:21:24 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:19.347 11:21:24 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:19.605 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:19.864 Waiting for block devices as requested 00:07:19.864 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:19.864 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:20.172 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:20.172 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:25.438 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:25.438 11:21:30 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:25.438 11:21:30 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:07:25.438 11:21:30 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:07:25.438 11:21:30 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:07:25.438 11:21:30 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:07:25.438 11:21:30 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:07:25.438 11:21:30 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:07:25.438 11:21:30 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:07:25.438 11:21:30 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:07:25.438 11:21:30 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:07:25.438 11:21:30 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:07:25.438 11:21:30 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:25.438 11:21:30 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:25.438 11:21:30 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:07:25.438 11:21:30 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:25.438 11:21:30 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:07:25.438 11:21:30 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:25.438 11:21:30 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:07:25.438 11:21:30 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:25.438 11:21:30 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:07:25.438 11:21:30 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:07:25.438 11:21:30 -- common/autotest_common.sh@1543 -- # continue 00:07:25.438 11:21:30 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:25.438 11:21:30 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:07:25.438 11:21:30 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:07:25.438 11:21:30 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:07:25.438 11:21:30 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:25.438 11:21:30 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:07:25.438 11:21:30 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:25.438 11:21:30 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:07:25.438 11:21:30 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:07:25.438 11:21:30 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:07:25.438 11:21:30 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:07:25.438 11:21:30 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:25.438 11:21:30 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:25.438 11:21:30 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:07:25.438 11:21:30 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:25.438 11:21:30 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:07:25.438 11:21:30 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:25.438 11:21:30 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:07:25.438 11:21:30 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:25.438 11:21:30 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:07:25.438 11:21:30 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:07:25.438 11:21:30 -- common/autotest_common.sh@1543 -- # continue 00:07:25.438 11:21:30 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:25.438 11:21:30 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:07:25.438 11:21:30 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:07:25.438 11:21:30 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:07:25.438 11:21:31 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:07:25.438 11:21:31 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:07:25.438 11:21:31 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:07:25.438 11:21:31 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:07:25.438 11:21:31 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:07:25.438 11:21:31 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:07:25.438 11:21:31 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:07:25.438 11:21:31 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:25.438 11:21:31 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:25.438 11:21:31 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:07:25.438 11:21:31 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:25.438 11:21:31 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:07:25.438 11:21:31 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:07:25.438 11:21:31 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:25.438 11:21:31 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:25.438 11:21:31 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:07:25.438 11:21:31 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:07:25.438 11:21:31 -- common/autotest_common.sh@1543 -- # continue 00:07:25.438 11:21:31 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:25.438 11:21:31 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:07:25.438 11:21:31 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:07:25.438 11:21:31 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:07:25.438 11:21:31 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:07:25.438 11:21:31 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:07:25.438 11:21:31 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:07:25.438 11:21:31 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:07:25.438 11:21:31 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:07:25.438 11:21:31 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:07:25.438 11:21:31 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:25.438 11:21:31 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:07:25.438 11:21:31 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:25.438 11:21:31 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:07:25.438 11:21:31 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:25.438 11:21:31 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:07:25.438 11:21:31 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:07:25.438 11:21:31 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:25.438 11:21:31 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:25.438 11:21:31 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:07:25.438 11:21:31 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
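Annotation: the readlink/grep/basename sequence repeated above for each controller maps a PCI BDF to its character device; the nvmeN index is not stable across driver rebinds (note 0000:00:10.0 resolves to nvme1 here, 0000:00:11.0 to nvme0), so it is recovered from sysfs each time. A condensed sketch of that lookup (hypothetical helper name; the traced original lives in common/autotest_common.sh):

# Map a PCI BDF (e.g. 0000:00:10.0) to its NVMe controller node.
nvme_ctrlr_from_bdf() {
    local bdf=$1 path
    # Each /sys/class/nvme/nvmeN symlink resolves to a sysfs path that
    # embeds the owning PCI address; keep the one matching our BDF.
    path=$(readlink -f /sys/class/nvme/nvme* | grep "/$bdf/nvme/nvme") || return 1
    printf '/dev/%s\n' "$(basename "$path")"
}
# Usage: ctrlr=$(nvme_ctrlr_from_bdf 0000:00:10.0)   # -> /dev/nvme1 in this run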
00:07:25.438 11:21:31 -- common/autotest_common.sh@1543 -- # continue 00:07:25.438 11:21:31 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:07:25.438 11:21:31 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:25.438 11:21:31 -- common/autotest_common.sh@10 -- # set +x 00:07:25.438 11:21:31 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:07:25.438 11:21:31 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:25.438 11:21:31 -- common/autotest_common.sh@10 -- # set +x 00:07:25.438 11:21:31 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:26.006 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:26.573 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:26.832 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:26.832 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:26.832 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:26.832 11:21:32 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:07:26.832 11:21:32 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:26.832 11:21:32 -- common/autotest_common.sh@10 -- # set +x 00:07:26.832 11:21:32 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:07:26.832 11:21:32 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:07:26.832 11:21:32 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:07:26.832 11:21:32 -- common/autotest_common.sh@1563 -- # bdfs=() 00:07:26.832 11:21:32 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:07:26.832 11:21:32 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:07:26.832 11:21:32 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:07:26.832 11:21:32 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:07:26.832 11:21:32 -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:26.832 11:21:32 -- common/autotest_common.sh@1498 -- # local bdfs 00:07:26.832 11:21:32 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:26.832 11:21:32 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:26.832 11:21:32 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:27.091 11:21:32 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:27.091 11:21:32 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:27.091 11:21:32 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:27.091 11:21:32 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:07:27.091 11:21:32 -- common/autotest_common.sh@1566 -- # device=0x0010 00:07:27.091 11:21:32 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:27.091 11:21:32 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:27.091 11:21:32 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:07:27.091 11:21:32 -- common/autotest_common.sh@1566 -- # device=0x0010 00:07:27.091 11:21:32 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:27.091 11:21:32 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:27.091 11:21:32 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:07:27.091 11:21:32 -- common/autotest_common.sh@1566 -- # device=0x0010 00:07:27.091 11:21:32 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
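Annotation: each controller above passed the same two nvme-cli probes: the OACS word from id-ctrl is masked for the Namespace Management bit (0x12a & 0x8 = 8, so management is supported), and unvmcap is read to see whether unallocated NVM capacity needs reclaiming (0 here, so every iteration hits continue). A sketch of those probes for one controller (field names follow nvme-cli's usual id-ctrl output, as seen in the trace):

# Probe one controller the way the traced loop above does.
ctrlr=/dev/nvme1                                         # from the BDF lookup
oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)  # e.g. ' 0x12a'
ns_manage=$(( oacs & 0x8 ))                              # bit 3: NS Management
unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
if (( ns_manage != 0 )) && (( unvmcap == 0 )); then
    echo "$ctrlr: NS management supported, no capacity to reclaim"
fi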
00:07:27.091 11:21:32 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:27.091 11:21:32 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:07:27.091 11:21:32 -- common/autotest_common.sh@1566 -- # device=0x0010 00:07:27.091 11:21:32 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:27.091 11:21:32 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:07:27.091 11:21:32 -- common/autotest_common.sh@1572 -- # return 0 00:07:27.091 11:21:32 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:07:27.091 11:21:32 -- common/autotest_common.sh@1580 -- # return 0 00:07:27.091 11:21:32 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:07:27.091 11:21:32 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:07:27.091 11:21:32 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:27.091 11:21:32 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:27.091 11:21:32 -- spdk/autotest.sh@149 -- # timing_enter lib 00:07:27.091 11:21:32 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:27.091 11:21:32 -- common/autotest_common.sh@10 -- # set +x 00:07:27.091 11:21:32 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:07:27.091 11:21:32 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:27.091 11:21:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:27.091 11:21:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.091 11:21:32 -- common/autotest_common.sh@10 -- # set +x 00:07:27.091 ************************************ 00:07:27.091 START TEST env 00:07:27.091 ************************************ 00:07:27.091 11:21:32 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:27.091 * Looking for test storage... 00:07:27.091 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:07:27.091 11:21:32 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:27.091 11:21:32 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:27.092 11:21:32 env -- common/autotest_common.sh@1693 -- # lcov --version 00:07:27.350 11:21:32 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:27.350 11:21:32 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:27.350 11:21:32 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:27.350 11:21:32 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:27.350 11:21:32 env -- scripts/common.sh@336 -- # IFS=.-: 00:07:27.350 11:21:32 env -- scripts/common.sh@336 -- # read -ra ver1 00:07:27.350 11:21:32 env -- scripts/common.sh@337 -- # IFS=.-: 00:07:27.350 11:21:32 env -- scripts/common.sh@337 -- # read -ra ver2 00:07:27.350 11:21:32 env -- scripts/common.sh@338 -- # local 'op=<' 00:07:27.350 11:21:32 env -- scripts/common.sh@340 -- # ver1_l=2 00:07:27.350 11:21:32 env -- scripts/common.sh@341 -- # ver2_l=1 00:07:27.350 11:21:32 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:27.350 11:21:32 env -- scripts/common.sh@344 -- # case "$op" in 00:07:27.350 11:21:32 env -- scripts/common.sh@345 -- # : 1 00:07:27.350 11:21:32 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:27.350 11:21:32 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:27.350 11:21:32 env -- scripts/common.sh@365 -- # decimal 1 00:07:27.350 11:21:32 env -- scripts/common.sh@353 -- # local d=1 00:07:27.350 11:21:32 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:27.350 11:21:32 env -- scripts/common.sh@355 -- # echo 1 00:07:27.350 11:21:32 env -- scripts/common.sh@365 -- # ver1[v]=1 00:07:27.350 11:21:32 env -- scripts/common.sh@366 -- # decimal 2 00:07:27.350 11:21:32 env -- scripts/common.sh@353 -- # local d=2 00:07:27.350 11:21:32 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:27.350 11:21:32 env -- scripts/common.sh@355 -- # echo 2 00:07:27.350 11:21:32 env -- scripts/common.sh@366 -- # ver2[v]=2 00:07:27.350 11:21:32 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:27.350 11:21:32 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:27.350 11:21:32 env -- scripts/common.sh@368 -- # return 0 00:07:27.350 11:21:32 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:27.350 11:21:32 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:27.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.350 --rc genhtml_branch_coverage=1 00:07:27.350 --rc genhtml_function_coverage=1 00:07:27.350 --rc genhtml_legend=1 00:07:27.350 --rc geninfo_all_blocks=1 00:07:27.350 --rc geninfo_unexecuted_blocks=1 00:07:27.350 00:07:27.350 ' 00:07:27.350 11:21:32 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:27.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.350 --rc genhtml_branch_coverage=1 00:07:27.350 --rc genhtml_function_coverage=1 00:07:27.350 --rc genhtml_legend=1 00:07:27.350 --rc geninfo_all_blocks=1 00:07:27.350 --rc geninfo_unexecuted_blocks=1 00:07:27.350 00:07:27.350 ' 00:07:27.350 11:21:32 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:27.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.350 --rc genhtml_branch_coverage=1 00:07:27.350 --rc genhtml_function_coverage=1 00:07:27.350 --rc genhtml_legend=1 00:07:27.350 --rc geninfo_all_blocks=1 00:07:27.350 --rc geninfo_unexecuted_blocks=1 00:07:27.350 00:07:27.350 ' 00:07:27.350 11:21:32 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:27.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.350 --rc genhtml_branch_coverage=1 00:07:27.350 --rc genhtml_function_coverage=1 00:07:27.350 --rc genhtml_legend=1 00:07:27.350 --rc geninfo_all_blocks=1 00:07:27.350 --rc geninfo_unexecuted_blocks=1 00:07:27.350 00:07:27.350 ' 00:07:27.350 11:21:32 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:27.350 11:21:32 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:27.350 11:21:32 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.350 11:21:32 env -- common/autotest_common.sh@10 -- # set +x 00:07:27.350 ************************************ 00:07:27.350 START TEST env_memory 00:07:27.350 ************************************ 00:07:27.350 11:21:32 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:27.350 00:07:27.350 00:07:27.350 CUnit - A unit testing framework for C - Version 2.1-3 00:07:27.350 http://cunit.sourceforge.net/ 00:07:27.350 00:07:27.350 00:07:27.350 Suite: memory 00:07:27.350 Test: alloc and free memory map ...[2024-11-20 11:21:33.019845] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:07:27.350 passed 00:07:27.350 Test: mem map translation ...[2024-11-20 11:21:33.092108] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:07:27.350 [2024-11-20 11:21:33.092222] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:07:27.350 [2024-11-20 11:21:33.092335] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:07:27.350 [2024-11-20 11:21:33.092376] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:07:27.609 passed 00:07:27.609 Test: mem map registration ...[2024-11-20 11:21:33.203634] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:07:27.609 [2024-11-20 11:21:33.203734] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:07:27.609 passed 00:07:27.609 Test: mem map adjacent registrations ...passed 00:07:27.609 00:07:27.609 Run Summary: Type Total Ran Passed Failed Inactive 00:07:27.609 suites 1 1 n/a 0 0 00:07:27.609 tests 4 4 4 0 0 00:07:27.609 asserts 152 152 152 0 n/a 00:07:27.609 00:07:27.609 Elapsed time = 0.371 seconds 00:07:27.609 00:07:27.609 real 0m0.420s 00:07:27.609 user 0m0.368s 00:07:27.609 sys 0m0.044s 00:07:27.609 11:21:33 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.609 11:21:33 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:07:27.609 ************************************ 00:07:27.609 END TEST env_memory 00:07:27.609 ************************************ 00:07:27.868 11:21:33 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:27.868 11:21:33 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:27.868 11:21:33 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.868 11:21:33 env -- common/autotest_common.sh@10 -- # set +x 00:07:27.868 ************************************ 00:07:27.868 START TEST env_vtophys 00:07:27.868 ************************************ 00:07:27.868 11:21:33 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:27.868 EAL: lib.eal log level changed from notice to debug 00:07:27.868 EAL: Detected lcore 0 as core 0 on socket 0 00:07:27.868 EAL: Detected lcore 1 as core 0 on socket 0 00:07:27.868 EAL: Detected lcore 2 as core 0 on socket 0 00:07:27.868 EAL: Detected lcore 3 as core 0 on socket 0 00:07:27.868 EAL: Detected lcore 4 as core 0 on socket 0 00:07:27.868 EAL: Detected lcore 5 as core 0 on socket 0 00:07:27.868 EAL: Detected lcore 6 as core 0 on socket 0 00:07:27.868 EAL: Detected lcore 7 as core 0 on socket 0 00:07:27.868 EAL: Detected lcore 8 as core 0 on socket 0 00:07:27.868 EAL: Detected lcore 9 as core 0 on socket 0 00:07:27.868 EAL: Maximum logical cores by configuration: 128 00:07:27.868 EAL: Detected CPU lcores: 10 00:07:27.868 EAL: Detected NUMA nodes: 1 00:07:27.868 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:07:27.868 EAL: Detected shared linkage of DPDK 00:07:27.868 EAL: No 
shared files mode enabled, IPC will be disabled 00:07:27.868 EAL: Selected IOVA mode 'PA' 00:07:27.868 EAL: Probing VFIO support... 00:07:27.868 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:27.868 EAL: VFIO modules not loaded, skipping VFIO support... 00:07:27.868 EAL: Ask a virtual area of 0x2e000 bytes 00:07:27.868 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:07:27.868 EAL: Setting up physically contiguous memory... 00:07:27.868 EAL: Setting maximum number of open files to 524288 00:07:27.868 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:07:27.868 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:07:27.868 EAL: Ask a virtual area of 0x61000 bytes 00:07:27.868 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:07:27.868 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:27.868 EAL: Ask a virtual area of 0x400000000 bytes 00:07:27.868 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:07:27.868 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:07:27.868 EAL: Ask a virtual area of 0x61000 bytes 00:07:27.868 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:07:27.868 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:27.868 EAL: Ask a virtual area of 0x400000000 bytes 00:07:27.868 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:07:27.868 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:07:27.868 EAL: Ask a virtual area of 0x61000 bytes 00:07:27.868 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:07:27.868 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:27.868 EAL: Ask a virtual area of 0x400000000 bytes 00:07:27.868 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:07:27.868 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:07:27.868 EAL: Ask a virtual area of 0x61000 bytes 00:07:27.868 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:07:27.868 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:27.868 EAL: Ask a virtual area of 0x400000000 bytes 00:07:27.868 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:07:27.868 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:07:27.868 EAL: Hugepages will be freed exactly as allocated. 00:07:27.868 EAL: No shared files mode enabled, IPC is disabled 00:07:27.868 EAL: No shared files mode enabled, IPC is disabled 00:07:27.868 EAL: TSC frequency is ~2100000 KHz 00:07:27.868 EAL: Main lcore 0 is ready (tid=7fe2fbafea40;cpuset=[0]) 00:07:27.868 EAL: Trying to obtain current memory policy. 00:07:27.868 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:27.868 EAL: Restoring previous memory policy: 0 00:07:27.868 EAL: request: mp_malloc_sync 00:07:27.868 EAL: No shared files mode enabled, IPC is disabled 00:07:27.868 EAL: Heap on socket 0 was expanded by 2MB 00:07:27.868 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:27.868 EAL: No PCI address specified using 'addr=' in: bus=pci 00:07:27.868 EAL: Mem event callback 'spdk:(nil)' registered 00:07:27.868 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:07:28.127 00:07:28.127 00:07:28.127 CUnit - A unit testing framework for C - Version 2.1-3 00:07:28.127 http://cunit.sourceforge.net/ 00:07:28.127 00:07:28.127 00:07:28.127 Suite: components_suite 00:07:28.696 Test: vtophys_malloc_test ...passed 00:07:28.696 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:07:28.696 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:28.696 EAL: Restoring previous memory policy: 4 00:07:28.696 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.696 EAL: request: mp_malloc_sync 00:07:28.696 EAL: No shared files mode enabled, IPC is disabled 00:07:28.696 EAL: Heap on socket 0 was expanded by 4MB 00:07:28.696 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.696 EAL: request: mp_malloc_sync 00:07:28.696 EAL: No shared files mode enabled, IPC is disabled 00:07:28.696 EAL: Heap on socket 0 was shrunk by 4MB 00:07:28.696 EAL: Trying to obtain current memory policy. 00:07:28.696 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:28.696 EAL: Restoring previous memory policy: 4 00:07:28.696 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.696 EAL: request: mp_malloc_sync 00:07:28.696 EAL: No shared files mode enabled, IPC is disabled 00:07:28.696 EAL: Heap on socket 0 was expanded by 6MB 00:07:28.696 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.696 EAL: request: mp_malloc_sync 00:07:28.696 EAL: No shared files mode enabled, IPC is disabled 00:07:28.696 EAL: Heap on socket 0 was shrunk by 6MB 00:07:28.696 EAL: Trying to obtain current memory policy. 00:07:28.696 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:28.696 EAL: Restoring previous memory policy: 4 00:07:28.696 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.696 EAL: request: mp_malloc_sync 00:07:28.696 EAL: No shared files mode enabled, IPC is disabled 00:07:28.696 EAL: Heap on socket 0 was expanded by 10MB 00:07:28.696 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.696 EAL: request: mp_malloc_sync 00:07:28.696 EAL: No shared files mode enabled, IPC is disabled 00:07:28.696 EAL: Heap on socket 0 was shrunk by 10MB 00:07:28.696 EAL: Trying to obtain current memory policy. 00:07:28.696 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:28.696 EAL: Restoring previous memory policy: 4 00:07:28.696 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.696 EAL: request: mp_malloc_sync 00:07:28.696 EAL: No shared files mode enabled, IPC is disabled 00:07:28.696 EAL: Heap on socket 0 was expanded by 18MB 00:07:28.696 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.696 EAL: request: mp_malloc_sync 00:07:28.696 EAL: No shared files mode enabled, IPC is disabled 00:07:28.696 EAL: Heap on socket 0 was shrunk by 18MB 00:07:28.696 EAL: Trying to obtain current memory policy. 00:07:28.696 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:28.696 EAL: Restoring previous memory policy: 4 00:07:28.696 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.696 EAL: request: mp_malloc_sync 00:07:28.696 EAL: No shared files mode enabled, IPC is disabled 00:07:28.696 EAL: Heap on socket 0 was expanded by 34MB 00:07:28.696 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.955 EAL: request: mp_malloc_sync 00:07:28.955 EAL: No shared files mode enabled, IPC is disabled 00:07:28.955 EAL: Heap on socket 0 was shrunk by 34MB 00:07:28.955 EAL: Trying to obtain current memory policy. 
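The repeated "Calling mem event callback 'spdk:(nil)'" entries above are DPDK's memory-event hook firing: a callback registered under the name "spdk" (with a NULL context, which the EAL prints as "(nil)") is invoked on every heap expansion or contraction so the registrant can update its memory maps. A minimal sketch of that mechanism against the public DPDK API; the registration name, log text, and helper name are illustrative, and only rte_mem_event_callback_register() and its callback signature are DPDK's own:

#include <stdio.h>
#include <rte_memory.h>

/* Invoked by the EAL each time the heap grows or shrinks, which is what
 * produces the "Heap on socket 0 was expanded/shrunk by N MB" pairs of
 * entries in the log. */
static void
mem_event_cb(enum rte_mem_event event, const void *addr, size_t len, void *arg)
{
    (void)arg;
    printf("mem event: %s addr=%p len=%zu\n",
           event == RTE_MEM_EVENT_ALLOC ? "alloc" : "free", addr, len);
}

int
register_demo_callback(void) /* hypothetical helper */
{
    /* Registered as "demo" with a NULL context; the EAL would report this
     * callback as 'demo:(nil)', just as it reports SPDK's as 'spdk:(nil)'. */
    return rte_mem_event_callback_register("demo", mem_event_cb, NULL);
}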
00:07:28.955 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:28.955 EAL: Restoring previous memory policy: 4 00:07:28.955 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.955 EAL: request: mp_malloc_sync 00:07:28.955 EAL: No shared files mode enabled, IPC is disabled 00:07:28.955 EAL: Heap on socket 0 was expanded by 66MB 00:07:28.955 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.955 EAL: request: mp_malloc_sync 00:07:28.955 EAL: No shared files mode enabled, IPC is disabled 00:07:28.955 EAL: Heap on socket 0 was shrunk by 66MB 00:07:29.214 EAL: Trying to obtain current memory policy. 00:07:29.214 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:29.214 EAL: Restoring previous memory policy: 4 00:07:29.214 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.214 EAL: request: mp_malloc_sync 00:07:29.214 EAL: No shared files mode enabled, IPC is disabled 00:07:29.214 EAL: Heap on socket 0 was expanded by 130MB 00:07:29.473 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.473 EAL: request: mp_malloc_sync 00:07:29.473 EAL: No shared files mode enabled, IPC is disabled 00:07:29.473 EAL: Heap on socket 0 was shrunk by 130MB 00:07:29.731 EAL: Trying to obtain current memory policy. 00:07:29.731 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:29.731 EAL: Restoring previous memory policy: 4 00:07:29.731 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.731 EAL: request: mp_malloc_sync 00:07:29.731 EAL: No shared files mode enabled, IPC is disabled 00:07:29.731 EAL: Heap on socket 0 was expanded by 258MB 00:07:30.298 EAL: Calling mem event callback 'spdk:(nil)' 00:07:30.298 EAL: request: mp_malloc_sync 00:07:30.298 EAL: No shared files mode enabled, IPC is disabled 00:07:30.298 EAL: Heap on socket 0 was shrunk by 258MB 00:07:30.866 EAL: Trying to obtain current memory policy. 00:07:30.866 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:31.124 EAL: Restoring previous memory policy: 4 00:07:31.124 EAL: Calling mem event callback 'spdk:(nil)' 00:07:31.124 EAL: request: mp_malloc_sync 00:07:31.124 EAL: No shared files mode enabled, IPC is disabled 00:07:31.124 EAL: Heap on socket 0 was expanded by 514MB 00:07:32.059 EAL: Calling mem event callback 'spdk:(nil)' 00:07:32.059 EAL: request: mp_malloc_sync 00:07:32.059 EAL: No shared files mode enabled, IPC is disabled 00:07:32.059 EAL: Heap on socket 0 was shrunk by 514MB 00:07:32.991 EAL: Trying to obtain current memory policy. 
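The doubling sequence above and below (4 MB, 6 MB, 10 MB, 18 MB, ... up to 1026 MB) is vtophys_spdk_malloc_test allocating progressively larger DMA-capable buffers and translating each one to a physical address. A standalone sketch of one allocate-translate-free cycle, assuming only the public spdk/env.h API; the app name, buffer size, and alignment are arbitrary choices for illustration:

#include <inttypes.h>
#include <stdio.h>
#include "spdk/env.h"

int
main(void)
{
    struct spdk_env_opts opts;
    uint64_t size = 2 * 1024 * 1024;   /* arbitrary 2 MB buffer */
    uint64_t paddr;
    void *buf;

    spdk_env_opts_init(&opts);
    opts.name = "vtophys_demo";        /* illustrative app name */
    if (spdk_env_init(&opts) < 0) {
        return 1;
    }

    /* DMA-capable allocation; if the heap must grow to satisfy it, the
     * 'spdk:(nil)' mem event callback above fires. */
    buf = spdk_malloc(size, 0x1000, NULL, SPDK_ENV_SOCKET_ID_ANY,
                      SPDK_MALLOC_DMA);
    if (buf == NULL) {
        return 1;
    }

    /* The translation under test: virtual address -> physical address. */
    paddr = spdk_vtophys(buf, &size);
    if (paddr == SPDK_VTOPHYS_ERROR) {
        fprintf(stderr, "translation failed\n");
    } else {
        printf("vaddr %p -> paddr 0x%" PRIx64 "\n", buf, paddr);
    }

    spdk_free(buf);
    return 0;
}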
00:07:32.991 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:33.249 EAL: Restoring previous memory policy: 4 00:07:33.249 EAL: Calling mem event callback 'spdk:(nil)' 00:07:33.249 EAL: request: mp_malloc_sync 00:07:33.249 EAL: No shared files mode enabled, IPC is disabled 00:07:33.249 EAL: Heap on socket 0 was expanded by 1026MB 00:07:35.778 EAL: Calling mem event callback 'spdk:(nil)' 00:07:35.778 EAL: request: mp_malloc_sync 00:07:35.778 EAL: No shared files mode enabled, IPC is disabled 00:07:35.778 EAL: Heap on socket 0 was shrunk by 1026MB 00:07:37.683 passed 00:07:37.683 00:07:37.683 Run Summary: Type Total Ran Passed Failed Inactive 00:07:37.683 suites 1 1 n/a 0 0 00:07:37.683 tests 2 2 2 0 0 00:07:37.683 asserts 5579 5579 5579 0 n/a 00:07:37.683 00:07:37.683 Elapsed time = 9.490 seconds 00:07:37.683 EAL: Calling mem event callback 'spdk:(nil)' 00:07:37.683 EAL: request: mp_malloc_sync 00:07:37.683 EAL: No shared files mode enabled, IPC is disabled 00:07:37.683 EAL: Heap on socket 0 was shrunk by 2MB 00:07:37.683 EAL: No shared files mode enabled, IPC is disabled 00:07:37.683 EAL: No shared files mode enabled, IPC is disabled 00:07:37.683 EAL: No shared files mode enabled, IPC is disabled 00:07:37.683 00:07:37.683 real 0m9.844s 00:07:37.683 user 0m8.703s 00:07:37.683 sys 0m0.968s 00:07:37.683 11:21:43 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.683 11:21:43 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:07:37.683 ************************************ 00:07:37.683 END TEST env_vtophys 00:07:37.683 ************************************ 00:07:37.683 11:21:43 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:37.683 11:21:43 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:37.683 11:21:43 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.683 11:21:43 env -- common/autotest_common.sh@10 -- # set +x 00:07:37.683 ************************************ 00:07:37.683 START TEST env_pci 00:07:37.683 ************************************ 00:07:37.683 11:21:43 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:37.683 00:07:37.683 00:07:37.683 CUnit - A unit testing framework for C - Version 2.1-3 00:07:37.683 http://cunit.sourceforge.net/ 00:07:37.683 00:07:37.683 00:07:37.683 Suite: pci 00:07:37.683 Test: pci_hook ...[2024-11-20 11:21:43.351890] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57978 has claimed it 00:07:37.683 EAL: Cannot find device (10000:00:01.0) 00:07:37.683 EAL: Failed to attach device on primary process 00:07:37.683 passed 00:07:37.683 00:07:37.683 Run Summary: Type Total Ran Passed Failed Inactive 00:07:37.683 suites 1 1 n/a 0 0 00:07:37.683 tests 1 1 1 0 0 00:07:37.683 asserts 25 25 25 0 n/a 00:07:37.683 00:07:37.683 Elapsed time = 0.009 seconds 00:07:37.683 00:07:37.683 real 0m0.083s 00:07:37.683 user 0m0.028s 00:07:37.683 sys 0m0.055s 00:07:37.683 11:21:43 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.683 11:21:43 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:07:37.683 ************************************ 00:07:37.683 END TEST env_pci 00:07:37.683 ************************************ 00:07:37.683 11:21:43 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:07:37.683 11:21:43 env -- env/env.sh@15 -- # uname 00:07:37.683 11:21:43 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:07:37.683 11:21:43 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:07:37.683 11:21:43 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:37.683 11:21:43 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:37.683 11:21:43 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.683 11:21:43 env -- common/autotest_common.sh@10 -- # set +x 00:07:37.942 ************************************ 00:07:37.942 START TEST env_dpdk_post_init 00:07:37.942 ************************************ 00:07:37.942 11:21:43 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:37.942 EAL: Detected CPU lcores: 10 00:07:37.942 EAL: Detected NUMA nodes: 1 00:07:37.942 EAL: Detected shared linkage of DPDK 00:07:37.942 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:37.942 EAL: Selected IOVA mode 'PA' 00:07:37.942 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:38.201 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:07:38.201 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:07:38.201 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:07:38.201 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:07:38.201 Starting DPDK initialization... 00:07:38.201 Starting SPDK post initialization... 00:07:38.201 SPDK NVMe probe 00:07:38.201 Attaching to 0000:00:10.0 00:07:38.201 Attaching to 0000:00:11.0 00:07:38.201 Attaching to 0000:00:12.0 00:07:38.201 Attaching to 0000:00:13.0 00:07:38.201 Attached to 0000:00:10.0 00:07:38.201 Attached to 0000:00:11.0 00:07:38.201 Attached to 0000:00:13.0 00:07:38.201 Attached to 0000:00:12.0 00:07:38.201 Cleaning up... 
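The probe lines above show env_dpdk_post_init discovering the four emulated QEMU NVMe controllers (PCI ID 1b36:0010) and attaching them through the spdk_nvme driver, following the standard probe/attach callback pattern from spdk/nvme.h. A minimal sketch of that pattern; the callback bodies and the helper name are illustrative:

#include <stdbool.h>
#include <stdio.h>
#include "spdk/nvme.h"

/* Called once per discovered controller; returning true means "attach it",
 * which is what produces the "Attaching to 0000:00:10.0"-style entries. */
static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
         struct spdk_nvme_ctrlr_opts *opts)
{
    (void)cb_ctx;
    (void)opts;
    printf("Attaching to %s\n", trid->traddr);
    return true;
}

/* Called after the controller has been initialized and is ready for I/O. */
static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
          struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
    (void)cb_ctx;
    (void)ctrlr;
    (void)opts;
    printf("Attached to %s\n", trid->traddr);
}

int
probe_all_local_nvme(void) /* hypothetical helper */
{
    /* A NULL trid means "probe every controller on the local PCIe bus". */
    return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
}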
00:07:38.201 00:07:38.201 real 0m0.340s 00:07:38.201 user 0m0.117s 00:07:38.201 sys 0m0.124s 00:07:38.201 ************************************ 00:07:38.201 END TEST env_dpdk_post_init 00:07:38.201 ************************************ 00:07:38.201 11:21:43 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.201 11:21:43 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:07:38.201 11:21:43 env -- env/env.sh@26 -- # uname 00:07:38.201 11:21:43 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:38.201 11:21:43 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:38.201 11:21:43 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:38.201 11:21:43 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.201 11:21:43 env -- common/autotest_common.sh@10 -- # set +x 00:07:38.201 ************************************ 00:07:38.201 START TEST env_mem_callbacks 00:07:38.201 ************************************ 00:07:38.201 11:21:43 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:38.201 EAL: Detected CPU lcores: 10 00:07:38.201 EAL: Detected NUMA nodes: 1 00:07:38.201 EAL: Detected shared linkage of DPDK 00:07:38.201 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:38.201 EAL: Selected IOVA mode 'PA' 00:07:38.460 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:38.460 00:07:38.460 00:07:38.460 CUnit - A unit testing framework for C - Version 2.1-3 00:07:38.460 http://cunit.sourceforge.net/ 00:07:38.460 00:07:38.460 00:07:38.460 Suite: memory 00:07:38.460 Test: test ... 00:07:38.460 register 0x200000200000 2097152 00:07:38.460 malloc 3145728 00:07:38.460 register 0x200000400000 4194304 00:07:38.460 buf 0x2000004fffc0 len 3145728 PASSED 00:07:38.460 malloc 64 00:07:38.460 buf 0x2000004ffec0 len 64 PASSED 00:07:38.460 malloc 4194304 00:07:38.460 register 0x200000800000 6291456 00:07:38.460 buf 0x2000009fffc0 len 4194304 PASSED 00:07:38.460 free 0x2000004fffc0 3145728 00:07:38.460 free 0x2000004ffec0 64 00:07:38.460 unregister 0x200000400000 4194304 PASSED 00:07:38.460 free 0x2000009fffc0 4194304 00:07:38.460 unregister 0x200000800000 6291456 PASSED 00:07:38.460 malloc 8388608 00:07:38.460 register 0x200000400000 10485760 00:07:38.460 buf 0x2000005fffc0 len 8388608 PASSED 00:07:38.460 free 0x2000005fffc0 8388608 00:07:38.460 unregister 0x200000400000 10485760 PASSED 00:07:38.460 passed 00:07:38.460 00:07:38.460 Run Summary: Type Total Ran Passed Failed Inactive 00:07:38.460 suites 1 1 n/a 0 0 00:07:38.460 tests 1 1 1 0 0 00:07:38.460 asserts 15 15 15 0 n/a 00:07:38.460 00:07:38.460 Elapsed time = 0.113 seconds 00:07:38.460 00:07:38.460 real 0m0.326s 00:07:38.460 user 0m0.146s 00:07:38.460 sys 0m0.078s 00:07:38.460 11:21:44 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.460 11:21:44 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:07:38.460 ************************************ 00:07:38.460 END TEST env_mem_callbacks 00:07:38.460 ************************************ 00:07:38.460 ************************************ 00:07:38.460 END TEST env 00:07:38.460 ************************************ 00:07:38.460 00:07:38.460 real 0m11.503s 00:07:38.460 user 0m9.555s 00:07:38.460 sys 0m1.572s 00:07:38.460 11:21:44 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.460 11:21:44 env -- 
common/autotest_common.sh@10 -- # set +x 00:07:38.720 11:21:44 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:38.720 11:21:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:38.720 11:21:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.720 11:21:44 -- common/autotest_common.sh@10 -- # set +x 00:07:38.720 ************************************ 00:07:38.720 START TEST rpc 00:07:38.720 ************************************ 00:07:38.720 11:21:44 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:38.720 * Looking for test storage... 00:07:38.720 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:38.720 11:21:44 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:38.720 11:21:44 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:38.720 11:21:44 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:07:38.720 11:21:44 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:38.720 11:21:44 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:38.720 11:21:44 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:38.720 11:21:44 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:38.720 11:21:44 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:38.720 11:21:44 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:38.720 11:21:44 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:38.720 11:21:44 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:38.720 11:21:44 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:38.720 11:21:44 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:38.720 11:21:44 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:38.720 11:21:44 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:38.720 11:21:44 rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:38.720 11:21:44 rpc -- scripts/common.sh@345 -- # : 1 00:07:38.720 11:21:44 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:38.720 11:21:44 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:38.720 11:21:44 rpc -- scripts/common.sh@365 -- # decimal 1 00:07:38.720 11:21:44 rpc -- scripts/common.sh@353 -- # local d=1 00:07:38.720 11:21:44 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:38.720 11:21:44 rpc -- scripts/common.sh@355 -- # echo 1 00:07:38.720 11:21:44 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:38.720 11:21:44 rpc -- scripts/common.sh@366 -- # decimal 2 00:07:38.720 11:21:44 rpc -- scripts/common.sh@353 -- # local d=2 00:07:38.720 11:21:44 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:38.720 11:21:44 rpc -- scripts/common.sh@355 -- # echo 2 00:07:38.720 11:21:44 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:38.720 11:21:44 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:38.720 11:21:44 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:38.720 11:21:44 rpc -- scripts/common.sh@368 -- # return 0 00:07:38.720 11:21:44 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:38.720 11:21:44 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:38.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.720 --rc genhtml_branch_coverage=1 00:07:38.720 --rc genhtml_function_coverage=1 00:07:38.720 --rc genhtml_legend=1 00:07:38.720 --rc geninfo_all_blocks=1 00:07:38.720 --rc geninfo_unexecuted_blocks=1 00:07:38.720 00:07:38.720 ' 00:07:38.720 11:21:44 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:38.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.720 --rc genhtml_branch_coverage=1 00:07:38.720 --rc genhtml_function_coverage=1 00:07:38.720 --rc genhtml_legend=1 00:07:38.720 --rc geninfo_all_blocks=1 00:07:38.720 --rc geninfo_unexecuted_blocks=1 00:07:38.720 00:07:38.720 ' 00:07:38.720 11:21:44 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:38.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.720 --rc genhtml_branch_coverage=1 00:07:38.720 --rc genhtml_function_coverage=1 00:07:38.720 --rc genhtml_legend=1 00:07:38.720 --rc geninfo_all_blocks=1 00:07:38.720 --rc geninfo_unexecuted_blocks=1 00:07:38.720 00:07:38.720 ' 00:07:38.721 11:21:44 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:38.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.721 --rc genhtml_branch_coverage=1 00:07:38.721 --rc genhtml_function_coverage=1 00:07:38.721 --rc genhtml_legend=1 00:07:38.721 --rc geninfo_all_blocks=1 00:07:38.721 --rc geninfo_unexecuted_blocks=1 00:07:38.721 00:07:38.721 ' 00:07:38.721 11:21:44 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58110 00:07:38.721 11:21:44 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:07:38.721 11:21:44 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:38.721 11:21:44 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58110 00:07:38.721 11:21:44 rpc -- common/autotest_common.sh@835 -- # '[' -z 58110 ']' 00:07:38.721 11:21:44 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.721 11:21:44 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:38.721 11:21:44 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
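waitforlisten above simply polls until the target's JSON-RPC server accepts connections on /var/tmp/spdk.sock; every rpc_cmd that follows in these tests is a JSON-RPC 2.0 request over that socket. A sketch of the same handshake in plain POSIX calls, sending rpc_get_methods (a real SPDK method); the retry budget and response buffer size are arbitrary:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int
main(void)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    const char *req =
        "{\"jsonrpc\":\"2.0\",\"method\":\"rpc_get_methods\",\"id\":1}";
    char resp[4096];
    ssize_t n;
    int fd = -1, i;

    strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);

    /* waitforlisten equivalent: retry until the target is accepting. */
    for (i = 0; i < 100; i++) {          /* arbitrary retry budget */
        fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0) {
            return 1;
        }
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            break;
        }
        close(fd);
        fd = -1;
        usleep(100 * 1000);
    }
    if (fd < 0) {
        return 1;
    }

    /* One JSON-RPC 2.0 request, the same framing rpc_cmd relies on. */
    if (write(fd, req, strlen(req)) < 0) {
        close(fd);
        return 1;
    }
    n = read(fd, resp, sizeof(resp) - 1);
    if (n > 0) {
        resp[n] = '\0';
        printf("%s\n", resp);
    }
    close(fd);
    return 0;
}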
00:07:38.721 11:21:44 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:38.721 11:21:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.979 [2024-11-20 11:21:44.565420] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:07:38.979 [2024-11-20 11:21:44.565575] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58110 ] 00:07:39.238 [2024-11-20 11:21:44.750563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.238 [2024-11-20 11:21:44.924089] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:39.238 [2024-11-20 11:21:44.924177] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58110' to capture a snapshot of events at runtime. 00:07:39.238 [2024-11-20 11:21:44.924199] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:39.238 [2024-11-20 11:21:44.924239] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:39.238 [2024-11-20 11:21:44.924256] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58110 for offline analysis/debug. 00:07:39.238 [2024-11-20 11:21:44.926154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.175 11:21:45 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:40.175 11:21:45 rpc -- common/autotest_common.sh@868 -- # return 0 00:07:40.175 11:21:45 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:40.175 11:21:45 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:40.175 11:21:45 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:40.175 11:21:45 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:40.175 11:21:45 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:40.175 11:21:45 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.175 11:21:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.175 ************************************ 00:07:40.175 START TEST rpc_integrity 00:07:40.175 ************************************ 00:07:40.175 11:21:45 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:07:40.175 11:21:45 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:40.175 11:21:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.175 11:21:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:40.175 11:21:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.175 11:21:45 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:40.175 11:21:45 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:40.434 11:21:45 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:40.434 11:21:45 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:40.434 11:21:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.434 11:21:45 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:40.434 11:21:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.434 11:21:45 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:40.434 11:21:45 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:40.434 11:21:45 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.434 11:21:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:40.434 11:21:45 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.434 11:21:46 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:40.434 { 00:07:40.434 "name": "Malloc0", 00:07:40.434 "aliases": [ 00:07:40.434 "7162361d-0de2-44f7-a421-f4706e386c88" 00:07:40.434 ], 00:07:40.434 "product_name": "Malloc disk", 00:07:40.434 "block_size": 512, 00:07:40.434 "num_blocks": 16384, 00:07:40.434 "uuid": "7162361d-0de2-44f7-a421-f4706e386c88", 00:07:40.434 "assigned_rate_limits": { 00:07:40.434 "rw_ios_per_sec": 0, 00:07:40.434 "rw_mbytes_per_sec": 0, 00:07:40.434 "r_mbytes_per_sec": 0, 00:07:40.434 "w_mbytes_per_sec": 0 00:07:40.434 }, 00:07:40.434 "claimed": false, 00:07:40.434 "zoned": false, 00:07:40.434 "supported_io_types": { 00:07:40.434 "read": true, 00:07:40.434 "write": true, 00:07:40.434 "unmap": true, 00:07:40.434 "flush": true, 00:07:40.434 "reset": true, 00:07:40.434 "nvme_admin": false, 00:07:40.434 "nvme_io": false, 00:07:40.435 "nvme_io_md": false, 00:07:40.435 "write_zeroes": true, 00:07:40.435 "zcopy": true, 00:07:40.435 "get_zone_info": false, 00:07:40.435 "zone_management": false, 00:07:40.435 "zone_append": false, 00:07:40.435 "compare": false, 00:07:40.435 "compare_and_write": false, 00:07:40.435 "abort": true, 00:07:40.435 "seek_hole": false, 00:07:40.435 "seek_data": false, 00:07:40.435 "copy": true, 00:07:40.435 "nvme_iov_md": false 00:07:40.435 }, 00:07:40.435 "memory_domains": [ 00:07:40.435 { 00:07:40.435 "dma_device_id": "system", 00:07:40.435 "dma_device_type": 1 00:07:40.435 }, 00:07:40.435 { 00:07:40.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.435 "dma_device_type": 2 00:07:40.435 } 00:07:40.435 ], 00:07:40.435 "driver_specific": {} 00:07:40.435 } 00:07:40.435 ]' 00:07:40.435 11:21:46 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:40.435 11:21:46 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:40.435 11:21:46 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:40.435 11:21:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.435 11:21:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:40.435 [2024-11-20 11:21:46.055275] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:40.435 [2024-11-20 11:21:46.055367] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:40.435 [2024-11-20 11:21:46.055410] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:40.435 [2024-11-20 11:21:46.055428] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:40.435 [2024-11-20 11:21:46.058468] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:40.435 [2024-11-20 11:21:46.058542] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:40.435 Passthru0 00:07:40.435 11:21:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.435 
11:21:46 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:40.435 11:21:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.435 11:21:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:40.435 11:21:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.435 11:21:46 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:40.435 { 00:07:40.435 "name": "Malloc0", 00:07:40.435 "aliases": [ 00:07:40.435 "7162361d-0de2-44f7-a421-f4706e386c88" 00:07:40.435 ], 00:07:40.435 "product_name": "Malloc disk", 00:07:40.435 "block_size": 512, 00:07:40.435 "num_blocks": 16384, 00:07:40.435 "uuid": "7162361d-0de2-44f7-a421-f4706e386c88", 00:07:40.435 "assigned_rate_limits": { 00:07:40.435 "rw_ios_per_sec": 0, 00:07:40.435 "rw_mbytes_per_sec": 0, 00:07:40.435 "r_mbytes_per_sec": 0, 00:07:40.435 "w_mbytes_per_sec": 0 00:07:40.435 }, 00:07:40.435 "claimed": true, 00:07:40.435 "claim_type": "exclusive_write", 00:07:40.435 "zoned": false, 00:07:40.435 "supported_io_types": { 00:07:40.435 "read": true, 00:07:40.435 "write": true, 00:07:40.435 "unmap": true, 00:07:40.435 "flush": true, 00:07:40.435 "reset": true, 00:07:40.435 "nvme_admin": false, 00:07:40.435 "nvme_io": false, 00:07:40.435 "nvme_io_md": false, 00:07:40.435 "write_zeroes": true, 00:07:40.435 "zcopy": true, 00:07:40.435 "get_zone_info": false, 00:07:40.435 "zone_management": false, 00:07:40.435 "zone_append": false, 00:07:40.435 "compare": false, 00:07:40.435 "compare_and_write": false, 00:07:40.435 "abort": true, 00:07:40.435 "seek_hole": false, 00:07:40.435 "seek_data": false, 00:07:40.435 "copy": true, 00:07:40.435 "nvme_iov_md": false 00:07:40.435 }, 00:07:40.435 "memory_domains": [ 00:07:40.435 { 00:07:40.435 "dma_device_id": "system", 00:07:40.435 "dma_device_type": 1 00:07:40.435 }, 00:07:40.435 { 00:07:40.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.435 "dma_device_type": 2 00:07:40.435 } 00:07:40.435 ], 00:07:40.435 "driver_specific": {} 00:07:40.435 }, 00:07:40.435 { 00:07:40.435 "name": "Passthru0", 00:07:40.435 "aliases": [ 00:07:40.435 "d42fa41b-1e48-5e90-8174-b60d554c1951" 00:07:40.435 ], 00:07:40.435 "product_name": "passthru", 00:07:40.435 "block_size": 512, 00:07:40.435 "num_blocks": 16384, 00:07:40.435 "uuid": "d42fa41b-1e48-5e90-8174-b60d554c1951", 00:07:40.435 "assigned_rate_limits": { 00:07:40.435 "rw_ios_per_sec": 0, 00:07:40.435 "rw_mbytes_per_sec": 0, 00:07:40.435 "r_mbytes_per_sec": 0, 00:07:40.435 "w_mbytes_per_sec": 0 00:07:40.435 }, 00:07:40.435 "claimed": false, 00:07:40.435 "zoned": false, 00:07:40.435 "supported_io_types": { 00:07:40.435 "read": true, 00:07:40.435 "write": true, 00:07:40.435 "unmap": true, 00:07:40.435 "flush": true, 00:07:40.435 "reset": true, 00:07:40.435 "nvme_admin": false, 00:07:40.435 "nvme_io": false, 00:07:40.435 "nvme_io_md": false, 00:07:40.435 "write_zeroes": true, 00:07:40.435 "zcopy": true, 00:07:40.435 "get_zone_info": false, 00:07:40.435 "zone_management": false, 00:07:40.435 "zone_append": false, 00:07:40.435 "compare": false, 00:07:40.435 "compare_and_write": false, 00:07:40.435 "abort": true, 00:07:40.435 "seek_hole": false, 00:07:40.435 "seek_data": false, 00:07:40.435 "copy": true, 00:07:40.435 "nvme_iov_md": false 00:07:40.435 }, 00:07:40.435 "memory_domains": [ 00:07:40.435 { 00:07:40.435 "dma_device_id": "system", 00:07:40.435 "dma_device_type": 1 00:07:40.435 }, 00:07:40.435 { 00:07:40.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.435 "dma_device_type": 2 
00:07:40.435 } 00:07:40.435 ], 00:07:40.435 "driver_specific": { 00:07:40.435 "passthru": { 00:07:40.435 "name": "Passthru0", 00:07:40.435 "base_bdev_name": "Malloc0" 00:07:40.435 } 00:07:40.435 } 00:07:40.435 } 00:07:40.435 ]' 00:07:40.435 11:21:46 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:40.435 11:21:46 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:40.435 11:21:46 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:40.435 11:21:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.435 11:21:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:40.435 11:21:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.435 11:21:46 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:40.435 11:21:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.435 11:21:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:40.435 11:21:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.435 11:21:46 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:40.435 11:21:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.435 11:21:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:40.435 11:21:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.435 11:21:46 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:40.693 11:21:46 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:40.693 11:21:46 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:40.693 00:07:40.693 real 0m0.380s 00:07:40.693 user 0m0.213s 00:07:40.693 sys 0m0.056s 00:07:40.693 11:21:46 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.693 ************************************ 00:07:40.693 END TEST rpc_integrity 00:07:40.693 ************************************ 00:07:40.693 11:21:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:40.693 11:21:46 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:40.693 11:21:46 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:40.693 11:21:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.693 11:21:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.693 ************************************ 00:07:40.693 START TEST rpc_plugins 00:07:40.693 ************************************ 00:07:40.693 11:21:46 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:07:40.693 11:21:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:07:40.693 11:21:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.693 11:21:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:40.693 11:21:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.693 11:21:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:40.693 11:21:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:40.693 11:21:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.693 11:21:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:40.693 11:21:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.693 11:21:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:40.693 { 00:07:40.693 "name": "Malloc1", 00:07:40.693 "aliases": 
[ 00:07:40.693 "52d6b45e-a566-4cbb-99b0-51ee9618da5d" 00:07:40.693 ], 00:07:40.693 "product_name": "Malloc disk", 00:07:40.693 "block_size": 4096, 00:07:40.693 "num_blocks": 256, 00:07:40.693 "uuid": "52d6b45e-a566-4cbb-99b0-51ee9618da5d", 00:07:40.693 "assigned_rate_limits": { 00:07:40.693 "rw_ios_per_sec": 0, 00:07:40.693 "rw_mbytes_per_sec": 0, 00:07:40.693 "r_mbytes_per_sec": 0, 00:07:40.693 "w_mbytes_per_sec": 0 00:07:40.693 }, 00:07:40.693 "claimed": false, 00:07:40.693 "zoned": false, 00:07:40.693 "supported_io_types": { 00:07:40.693 "read": true, 00:07:40.693 "write": true, 00:07:40.693 "unmap": true, 00:07:40.693 "flush": true, 00:07:40.693 "reset": true, 00:07:40.693 "nvme_admin": false, 00:07:40.693 "nvme_io": false, 00:07:40.693 "nvme_io_md": false, 00:07:40.693 "write_zeroes": true, 00:07:40.693 "zcopy": true, 00:07:40.693 "get_zone_info": false, 00:07:40.693 "zone_management": false, 00:07:40.693 "zone_append": false, 00:07:40.693 "compare": false, 00:07:40.693 "compare_and_write": false, 00:07:40.693 "abort": true, 00:07:40.693 "seek_hole": false, 00:07:40.693 "seek_data": false, 00:07:40.693 "copy": true, 00:07:40.693 "nvme_iov_md": false 00:07:40.693 }, 00:07:40.693 "memory_domains": [ 00:07:40.693 { 00:07:40.693 "dma_device_id": "system", 00:07:40.693 "dma_device_type": 1 00:07:40.693 }, 00:07:40.693 { 00:07:40.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.693 "dma_device_type": 2 00:07:40.693 } 00:07:40.693 ], 00:07:40.693 "driver_specific": {} 00:07:40.693 } 00:07:40.693 ]' 00:07:40.693 11:21:46 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:07:40.693 11:21:46 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:40.693 11:21:46 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:40.693 11:21:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.693 11:21:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:40.693 11:21:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.693 11:21:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:40.693 11:21:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.693 11:21:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:40.693 11:21:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.693 11:21:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:40.693 11:21:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:07:40.952 11:21:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:40.952 00:07:40.952 real 0m0.159s 00:07:40.952 user 0m0.093s 00:07:40.952 sys 0m0.023s 00:07:40.952 11:21:46 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.952 11:21:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:40.952 ************************************ 00:07:40.952 END TEST rpc_plugins 00:07:40.952 ************************************ 00:07:40.952 11:21:46 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:40.952 11:21:46 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:40.952 11:21:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.952 11:21:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.952 ************************************ 00:07:40.952 START TEST rpc_trace_cmd_test 00:07:40.952 ************************************ 00:07:40.952 11:21:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:07:40.952 11:21:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:07:40.952 11:21:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:40.952 11:21:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.952 11:21:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.952 11:21:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.952 11:21:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:07:40.952 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58110", 00:07:40.952 "tpoint_group_mask": "0x8", 00:07:40.952 "iscsi_conn": { 00:07:40.952 "mask": "0x2", 00:07:40.952 "tpoint_mask": "0x0" 00:07:40.952 }, 00:07:40.952 "scsi": { 00:07:40.952 "mask": "0x4", 00:07:40.952 "tpoint_mask": "0x0" 00:07:40.952 }, 00:07:40.952 "bdev": { 00:07:40.952 "mask": "0x8", 00:07:40.952 "tpoint_mask": "0xffffffffffffffff" 00:07:40.952 }, 00:07:40.952 "nvmf_rdma": { 00:07:40.952 "mask": "0x10", 00:07:40.952 "tpoint_mask": "0x0" 00:07:40.952 }, 00:07:40.952 "nvmf_tcp": { 00:07:40.952 "mask": "0x20", 00:07:40.952 "tpoint_mask": "0x0" 00:07:40.952 }, 00:07:40.952 "ftl": { 00:07:40.952 "mask": "0x40", 00:07:40.952 "tpoint_mask": "0x0" 00:07:40.952 }, 00:07:40.952 "blobfs": { 00:07:40.952 "mask": "0x80", 00:07:40.952 "tpoint_mask": "0x0" 00:07:40.952 }, 00:07:40.952 "dsa": { 00:07:40.952 "mask": "0x200", 00:07:40.952 "tpoint_mask": "0x0" 00:07:40.952 }, 00:07:40.952 "thread": { 00:07:40.952 "mask": "0x400", 00:07:40.952 "tpoint_mask": "0x0" 00:07:40.952 }, 00:07:40.952 "nvme_pcie": { 00:07:40.952 "mask": "0x800", 00:07:40.952 "tpoint_mask": "0x0" 00:07:40.952 }, 00:07:40.952 "iaa": { 00:07:40.952 "mask": "0x1000", 00:07:40.952 "tpoint_mask": "0x0" 00:07:40.952 }, 00:07:40.952 "nvme_tcp": { 00:07:40.952 "mask": "0x2000", 00:07:40.952 "tpoint_mask": "0x0" 00:07:40.952 }, 00:07:40.952 "bdev_nvme": { 00:07:40.952 "mask": "0x4000", 00:07:40.952 "tpoint_mask": "0x0" 00:07:40.952 }, 00:07:40.952 "sock": { 00:07:40.952 "mask": "0x8000", 00:07:40.952 "tpoint_mask": "0x0" 00:07:40.952 }, 00:07:40.952 "blob": { 00:07:40.952 "mask": "0x10000", 00:07:40.952 "tpoint_mask": "0x0" 00:07:40.952 }, 00:07:40.952 "bdev_raid": { 00:07:40.952 "mask": "0x20000", 00:07:40.952 "tpoint_mask": "0x0" 00:07:40.952 }, 00:07:40.952 "scheduler": { 00:07:40.952 "mask": "0x40000", 00:07:40.952 "tpoint_mask": "0x0" 00:07:40.952 } 00:07:40.952 }' 00:07:40.952 11:21:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:07:40.952 11:21:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:07:40.952 11:21:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:40.952 11:21:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:40.952 11:21:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:40.952 11:21:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:40.952 11:21:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:41.210 11:21:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:41.210 11:21:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:41.210 11:21:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:07:41.210 00:07:41.210 real 0m0.255s 00:07:41.210 user 0m0.214s 00:07:41.210 sys 0m0.031s 00:07:41.210 11:21:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:07:41.210 ************************************ 00:07:41.210 END TEST rpc_trace_cmd_test 00:07:41.210 11:21:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.210 ************************************ 00:07:41.210 11:21:46 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:07:41.210 11:21:46 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:41.210 11:21:46 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:41.210 11:21:46 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:41.210 11:21:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.210 11:21:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:41.210 ************************************ 00:07:41.210 START TEST rpc_daemon_integrity 00:07:41.210 ************************************ 00:07:41.210 11:21:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:07:41.210 11:21:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:41.210 11:21:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.210 11:21:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:41.210 11:21:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.211 11:21:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:41.211 11:21:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:41.211 11:21:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:41.211 11:21:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:41.211 11:21:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.211 11:21:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:41.211 11:21:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.211 11:21:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:07:41.211 11:21:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:41.211 11:21:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.211 11:21:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:41.211 11:21:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.211 11:21:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:41.211 { 00:07:41.211 "name": "Malloc2", 00:07:41.211 "aliases": [ 00:07:41.211 "c254f6e8-0c75-4502-8c7e-f57d06f06214" 00:07:41.211 ], 00:07:41.211 "product_name": "Malloc disk", 00:07:41.211 "block_size": 512, 00:07:41.211 "num_blocks": 16384, 00:07:41.211 "uuid": "c254f6e8-0c75-4502-8c7e-f57d06f06214", 00:07:41.211 "assigned_rate_limits": { 00:07:41.211 "rw_ios_per_sec": 0, 00:07:41.211 "rw_mbytes_per_sec": 0, 00:07:41.211 "r_mbytes_per_sec": 0, 00:07:41.211 "w_mbytes_per_sec": 0 00:07:41.211 }, 00:07:41.211 "claimed": false, 00:07:41.211 "zoned": false, 00:07:41.211 "supported_io_types": { 00:07:41.211 "read": true, 00:07:41.211 "write": true, 00:07:41.211 "unmap": true, 00:07:41.211 "flush": true, 00:07:41.211 "reset": true, 00:07:41.211 "nvme_admin": false, 00:07:41.211 "nvme_io": false, 00:07:41.211 "nvme_io_md": false, 00:07:41.211 "write_zeroes": true, 00:07:41.211 "zcopy": true, 00:07:41.211 "get_zone_info": false, 00:07:41.211 "zone_management": false, 00:07:41.211 "zone_append": false, 00:07:41.211 "compare": false, 00:07:41.211 
"compare_and_write": false, 00:07:41.211 "abort": true, 00:07:41.211 "seek_hole": false, 00:07:41.211 "seek_data": false, 00:07:41.211 "copy": true, 00:07:41.211 "nvme_iov_md": false 00:07:41.211 }, 00:07:41.211 "memory_domains": [ 00:07:41.211 { 00:07:41.211 "dma_device_id": "system", 00:07:41.211 "dma_device_type": 1 00:07:41.211 }, 00:07:41.211 { 00:07:41.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:41.211 "dma_device_type": 2 00:07:41.211 } 00:07:41.211 ], 00:07:41.211 "driver_specific": {} 00:07:41.211 } 00:07:41.211 ]' 00:07:41.211 11:21:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:41.470 11:21:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:41.470 11:21:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:07:41.470 11:21:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.470 11:21:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:41.470 [2024-11-20 11:21:46.980157] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:07:41.470 [2024-11-20 11:21:46.980255] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:41.470 [2024-11-20 11:21:46.980294] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:07:41.470 [2024-11-20 11:21:46.980317] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:41.470 [2024-11-20 11:21:46.984070] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:41.470 [2024-11-20 11:21:46.984136] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:41.470 Passthru0 00:07:41.470 11:21:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.470 11:21:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:41.470 11:21:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.470 11:21:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:41.470 11:21:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.470 11:21:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:41.470 { 00:07:41.470 "name": "Malloc2", 00:07:41.470 "aliases": [ 00:07:41.470 "c254f6e8-0c75-4502-8c7e-f57d06f06214" 00:07:41.470 ], 00:07:41.470 "product_name": "Malloc disk", 00:07:41.470 "block_size": 512, 00:07:41.470 "num_blocks": 16384, 00:07:41.470 "uuid": "c254f6e8-0c75-4502-8c7e-f57d06f06214", 00:07:41.470 "assigned_rate_limits": { 00:07:41.470 "rw_ios_per_sec": 0, 00:07:41.470 "rw_mbytes_per_sec": 0, 00:07:41.470 "r_mbytes_per_sec": 0, 00:07:41.470 "w_mbytes_per_sec": 0 00:07:41.470 }, 00:07:41.470 "claimed": true, 00:07:41.470 "claim_type": "exclusive_write", 00:07:41.470 "zoned": false, 00:07:41.470 "supported_io_types": { 00:07:41.470 "read": true, 00:07:41.470 "write": true, 00:07:41.470 "unmap": true, 00:07:41.470 "flush": true, 00:07:41.470 "reset": true, 00:07:41.470 "nvme_admin": false, 00:07:41.470 "nvme_io": false, 00:07:41.470 "nvme_io_md": false, 00:07:41.470 "write_zeroes": true, 00:07:41.470 "zcopy": true, 00:07:41.470 "get_zone_info": false, 00:07:41.470 "zone_management": false, 00:07:41.470 "zone_append": false, 00:07:41.470 "compare": false, 00:07:41.470 "compare_and_write": false, 00:07:41.470 "abort": true, 00:07:41.470 "seek_hole": false, 00:07:41.470 "seek_data": false, 
00:07:41.470 "copy": true, 00:07:41.470 "nvme_iov_md": false 00:07:41.470 }, 00:07:41.470 "memory_domains": [ 00:07:41.470 { 00:07:41.470 "dma_device_id": "system", 00:07:41.470 "dma_device_type": 1 00:07:41.470 }, 00:07:41.470 { 00:07:41.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:41.470 "dma_device_type": 2 00:07:41.470 } 00:07:41.470 ], 00:07:41.470 "driver_specific": {} 00:07:41.470 }, 00:07:41.470 { 00:07:41.470 "name": "Passthru0", 00:07:41.470 "aliases": [ 00:07:41.470 "c9a2b6a6-b5db-5533-876e-f679b5178daf" 00:07:41.470 ], 00:07:41.470 "product_name": "passthru", 00:07:41.470 "block_size": 512, 00:07:41.470 "num_blocks": 16384, 00:07:41.470 "uuid": "c9a2b6a6-b5db-5533-876e-f679b5178daf", 00:07:41.470 "assigned_rate_limits": { 00:07:41.470 "rw_ios_per_sec": 0, 00:07:41.470 "rw_mbytes_per_sec": 0, 00:07:41.470 "r_mbytes_per_sec": 0, 00:07:41.470 "w_mbytes_per_sec": 0 00:07:41.470 }, 00:07:41.470 "claimed": false, 00:07:41.470 "zoned": false, 00:07:41.470 "supported_io_types": { 00:07:41.470 "read": true, 00:07:41.470 "write": true, 00:07:41.470 "unmap": true, 00:07:41.470 "flush": true, 00:07:41.470 "reset": true, 00:07:41.470 "nvme_admin": false, 00:07:41.470 "nvme_io": false, 00:07:41.470 "nvme_io_md": false, 00:07:41.470 "write_zeroes": true, 00:07:41.470 "zcopy": true, 00:07:41.470 "get_zone_info": false, 00:07:41.470 "zone_management": false, 00:07:41.470 "zone_append": false, 00:07:41.470 "compare": false, 00:07:41.470 "compare_and_write": false, 00:07:41.470 "abort": true, 00:07:41.470 "seek_hole": false, 00:07:41.470 "seek_data": false, 00:07:41.470 "copy": true, 00:07:41.470 "nvme_iov_md": false 00:07:41.470 }, 00:07:41.470 "memory_domains": [ 00:07:41.470 { 00:07:41.470 "dma_device_id": "system", 00:07:41.470 "dma_device_type": 1 00:07:41.470 }, 00:07:41.470 { 00:07:41.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:41.470 "dma_device_type": 2 00:07:41.470 } 00:07:41.470 ], 00:07:41.470 "driver_specific": { 00:07:41.470 "passthru": { 00:07:41.470 "name": "Passthru0", 00:07:41.470 "base_bdev_name": "Malloc2" 00:07:41.470 } 00:07:41.470 } 00:07:41.470 } 00:07:41.470 ]' 00:07:41.470 11:21:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:41.470 11:21:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:41.470 11:21:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:41.470 11:21:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.470 11:21:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:41.470 11:21:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.470 11:21:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:41.471 11:21:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.471 11:21:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:41.471 11:21:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.471 11:21:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:41.471 11:21:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.471 11:21:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:41.471 11:21:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.471 11:21:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:07:41.471 11:21:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:41.471 11:21:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:41.471 00:07:41.471 real 0m0.327s 00:07:41.471 user 0m0.187s 00:07:41.471 sys 0m0.043s 00:07:41.471 11:21:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.471 11:21:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:41.471 ************************************ 00:07:41.471 END TEST rpc_daemon_integrity 00:07:41.471 ************************************ 00:07:41.471 11:21:47 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:41.471 11:21:47 rpc -- rpc/rpc.sh@84 -- # killprocess 58110 00:07:41.471 11:21:47 rpc -- common/autotest_common.sh@954 -- # '[' -z 58110 ']' 00:07:41.471 11:21:47 rpc -- common/autotest_common.sh@958 -- # kill -0 58110 00:07:41.471 11:21:47 rpc -- common/autotest_common.sh@959 -- # uname 00:07:41.471 11:21:47 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:41.471 11:21:47 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58110 00:07:41.729 11:21:47 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:41.729 killing process with pid 58110 00:07:41.729 11:21:47 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:41.729 11:21:47 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58110' 00:07:41.729 11:21:47 rpc -- common/autotest_common.sh@973 -- # kill 58110 00:07:41.729 11:21:47 rpc -- common/autotest_common.sh@978 -- # wait 58110 00:07:45.008 00:07:45.008 real 0m5.831s 00:07:45.008 user 0m6.394s 00:07:45.008 sys 0m0.887s 00:07:45.008 11:21:50 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.008 11:21:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.008 ************************************ 00:07:45.008 END TEST rpc 00:07:45.008 ************************************ 00:07:45.008 11:21:50 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:07:45.008 11:21:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:45.008 11:21:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.008 11:21:50 -- common/autotest_common.sh@10 -- # set +x 00:07:45.008 ************************************ 00:07:45.008 START TEST skip_rpc 00:07:45.008 ************************************ 00:07:45.008 11:21:50 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:07:45.008 * Looking for test storage... 
00:07:45.008 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:45.008 11:21:50 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:45.008 11:21:50 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:07:45.008 11:21:50 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:45.008 11:21:50 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:45.008 11:21:50 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:45.008 11:21:50 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:45.008 11:21:50 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:45.008 11:21:50 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:45.008 11:21:50 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:45.008 11:21:50 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:45.008 11:21:50 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:45.008 11:21:50 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:45.008 11:21:50 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:45.008 11:21:50 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:45.008 11:21:50 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:45.008 11:21:50 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:45.008 11:21:50 skip_rpc -- scripts/common.sh@345 -- # : 1 00:07:45.008 11:21:50 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:45.008 11:21:50 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:45.008 11:21:50 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:45.008 11:21:50 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:07:45.008 11:21:50 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:45.008 11:21:50 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:07:45.008 11:21:50 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:45.008 11:21:50 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:45.008 11:21:50 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:07:45.008 11:21:50 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:45.008 11:21:50 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:07:45.008 11:21:50 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:45.008 11:21:50 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:45.008 11:21:50 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:45.008 11:21:50 skip_rpc -- scripts/common.sh@368 -- # return 0 00:07:45.008 11:21:50 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:45.008 11:21:50 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:45.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.009 --rc genhtml_branch_coverage=1 00:07:45.009 --rc genhtml_function_coverage=1 00:07:45.009 --rc genhtml_legend=1 00:07:45.009 --rc geninfo_all_blocks=1 00:07:45.009 --rc geninfo_unexecuted_blocks=1 00:07:45.009 00:07:45.009 ' 00:07:45.009 11:21:50 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:45.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.009 --rc genhtml_branch_coverage=1 00:07:45.009 --rc genhtml_function_coverage=1 00:07:45.009 --rc genhtml_legend=1 00:07:45.009 --rc geninfo_all_blocks=1 00:07:45.009 --rc geninfo_unexecuted_blocks=1 00:07:45.009 00:07:45.009 ' 00:07:45.009 11:21:50 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:07:45.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.009 --rc genhtml_branch_coverage=1 00:07:45.009 --rc genhtml_function_coverage=1 00:07:45.009 --rc genhtml_legend=1 00:07:45.009 --rc geninfo_all_blocks=1 00:07:45.009 --rc geninfo_unexecuted_blocks=1 00:07:45.009 00:07:45.009 ' 00:07:45.009 11:21:50 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:45.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.009 --rc genhtml_branch_coverage=1 00:07:45.009 --rc genhtml_function_coverage=1 00:07:45.009 --rc genhtml_legend=1 00:07:45.009 --rc geninfo_all_blocks=1 00:07:45.009 --rc geninfo_unexecuted_blocks=1 00:07:45.009 00:07:45.009 ' 00:07:45.009 11:21:50 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:45.009 11:21:50 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:45.009 11:21:50 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:07:45.009 11:21:50 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:45.009 11:21:50 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.009 11:21:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.009 ************************************ 00:07:45.009 START TEST skip_rpc 00:07:45.009 ************************************ 00:07:45.009 11:21:50 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:07:45.009 11:21:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58345 00:07:45.009 11:21:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:07:45.009 11:21:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:45.009 11:21:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:07:45.009 [2024-11-20 11:21:50.460016] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:07:45.009 [2024-11-20 11:21:50.460188] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58345 ] 00:07:45.009 [2024-11-20 11:21:50.657398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.267 [2024-11-20 11:21:50.832321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.535 11:21:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:07:50.535 11:21:55 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:50.535 11:21:55 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:07:50.535 11:21:55 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:50.535 11:21:55 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:50.535 11:21:55 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:50.535 11:21:55 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:50.535 11:21:55 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:07:50.535 11:21:55 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.535 11:21:55 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.535 11:21:55 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:50.535 11:21:55 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:50.535 11:21:55 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:50.535 11:21:55 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:50.535 11:21:55 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:50.535 11:21:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:07:50.535 11:21:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58345 00:07:50.535 11:21:55 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 58345 ']' 00:07:50.535 11:21:55 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 58345 00:07:50.535 11:21:55 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:07:50.535 11:21:55 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:50.535 11:21:55 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58345 00:07:50.535 11:21:55 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:50.535 killing process with pid 58345 00:07:50.535 11:21:55 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:50.535 11:21:55 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58345' 00:07:50.535 11:21:55 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 58345 00:07:50.536 11:21:55 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 58345 00:07:52.439 00:07:52.439 real 0m7.750s 00:07:52.439 user 0m7.212s 00:07:52.439 sys 0m0.440s 00:07:52.439 11:21:58 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.439 ************************************ 00:07:52.439 END TEST skip_rpc 00:07:52.439 ************************************ 00:07:52.439 11:21:58 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:07:52.439 11:21:58 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:07:52.439 11:21:58 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:52.439 11:21:58 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.439 11:21:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.439 ************************************ 00:07:52.439 START TEST skip_rpc_with_json 00:07:52.439 ************************************ 00:07:52.439 11:21:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:07:52.439 11:21:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:07:52.439 11:21:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58455 00:07:52.439 11:21:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:52.439 11:21:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58455 00:07:52.439 11:21:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58455 ']' 00:07:52.439 11:21:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:52.439 11:21:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.439 11:21:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:52.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.439 11:21:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.439 11:21:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:52.439 11:21:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:52.698 [2024-11-20 11:21:58.289275] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:07:52.698 [2024-11-20 11:21:58.289456] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58455 ] 00:07:52.957 [2024-11-20 11:21:58.483572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.957 [2024-11-20 11:21:58.611721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.889 11:21:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:53.889 11:21:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:07:53.889 11:21:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:07:53.889 11:21:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.889 11:21:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:53.889 [2024-11-20 11:21:59.620914] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:07:53.889 request: 00:07:53.889 { 00:07:53.889 "trtype": "tcp", 00:07:53.889 "method": "nvmf_get_transports", 00:07:53.889 "req_id": 1 00:07:53.889 } 00:07:53.889 Got JSON-RPC error response 00:07:53.889 response: 00:07:53.889 { 00:07:53.889 "code": -19, 00:07:53.889 "message": "No such device" 00:07:53.889 } 00:07:53.889 11:21:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:53.889 11:21:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:07:53.889 11:21:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.889 11:21:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:53.889 [2024-11-20 11:21:59.629087] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:53.889 11:21:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.889 11:21:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:07:53.889 11:21:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.889 11:21:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:54.148 11:21:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.148 11:21:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:54.148 { 00:07:54.148 "subsystems": [ 00:07:54.148 { 00:07:54.148 "subsystem": "fsdev", 00:07:54.148 "config": [ 00:07:54.148 { 00:07:54.148 "method": "fsdev_set_opts", 00:07:54.148 "params": { 00:07:54.148 "fsdev_io_pool_size": 65535, 00:07:54.148 "fsdev_io_cache_size": 256 00:07:54.148 } 00:07:54.148 } 00:07:54.148 ] 00:07:54.148 }, 00:07:54.148 { 00:07:54.148 "subsystem": "keyring", 00:07:54.148 "config": [] 00:07:54.148 }, 00:07:54.148 { 00:07:54.148 "subsystem": "iobuf", 00:07:54.148 "config": [ 00:07:54.148 { 00:07:54.148 "method": "iobuf_set_options", 00:07:54.148 "params": { 00:07:54.148 "small_pool_count": 8192, 00:07:54.148 "large_pool_count": 1024, 00:07:54.148 "small_bufsize": 8192, 00:07:54.148 "large_bufsize": 135168, 00:07:54.148 "enable_numa": false 00:07:54.148 } 00:07:54.148 } 00:07:54.148 ] 00:07:54.148 }, 00:07:54.148 { 00:07:54.148 "subsystem": "sock", 00:07:54.148 "config": [ 00:07:54.148 { 
00:07:54.148 "method": "sock_set_default_impl", 00:07:54.148 "params": { 00:07:54.148 "impl_name": "posix" 00:07:54.148 } 00:07:54.148 }, 00:07:54.148 { 00:07:54.148 "method": "sock_impl_set_options", 00:07:54.148 "params": { 00:07:54.148 "impl_name": "ssl", 00:07:54.148 "recv_buf_size": 4096, 00:07:54.148 "send_buf_size": 4096, 00:07:54.148 "enable_recv_pipe": true, 00:07:54.148 "enable_quickack": false, 00:07:54.148 "enable_placement_id": 0, 00:07:54.148 "enable_zerocopy_send_server": true, 00:07:54.148 "enable_zerocopy_send_client": false, 00:07:54.148 "zerocopy_threshold": 0, 00:07:54.148 "tls_version": 0, 00:07:54.148 "enable_ktls": false 00:07:54.148 } 00:07:54.148 }, 00:07:54.148 { 00:07:54.148 "method": "sock_impl_set_options", 00:07:54.148 "params": { 00:07:54.148 "impl_name": "posix", 00:07:54.148 "recv_buf_size": 2097152, 00:07:54.148 "send_buf_size": 2097152, 00:07:54.148 "enable_recv_pipe": true, 00:07:54.148 "enable_quickack": false, 00:07:54.148 "enable_placement_id": 0, 00:07:54.148 "enable_zerocopy_send_server": true, 00:07:54.148 "enable_zerocopy_send_client": false, 00:07:54.148 "zerocopy_threshold": 0, 00:07:54.148 "tls_version": 0, 00:07:54.148 "enable_ktls": false 00:07:54.148 } 00:07:54.148 } 00:07:54.148 ] 00:07:54.148 }, 00:07:54.148 { 00:07:54.148 "subsystem": "vmd", 00:07:54.148 "config": [] 00:07:54.148 }, 00:07:54.148 { 00:07:54.148 "subsystem": "accel", 00:07:54.148 "config": [ 00:07:54.148 { 00:07:54.148 "method": "accel_set_options", 00:07:54.148 "params": { 00:07:54.148 "small_cache_size": 128, 00:07:54.148 "large_cache_size": 16, 00:07:54.148 "task_count": 2048, 00:07:54.148 "sequence_count": 2048, 00:07:54.148 "buf_count": 2048 00:07:54.148 } 00:07:54.148 } 00:07:54.148 ] 00:07:54.148 }, 00:07:54.148 { 00:07:54.148 "subsystem": "bdev", 00:07:54.148 "config": [ 00:07:54.148 { 00:07:54.148 "method": "bdev_set_options", 00:07:54.148 "params": { 00:07:54.148 "bdev_io_pool_size": 65535, 00:07:54.148 "bdev_io_cache_size": 256, 00:07:54.148 "bdev_auto_examine": true, 00:07:54.148 "iobuf_small_cache_size": 128, 00:07:54.148 "iobuf_large_cache_size": 16 00:07:54.148 } 00:07:54.148 }, 00:07:54.148 { 00:07:54.148 "method": "bdev_raid_set_options", 00:07:54.148 "params": { 00:07:54.148 "process_window_size_kb": 1024, 00:07:54.148 "process_max_bandwidth_mb_sec": 0 00:07:54.148 } 00:07:54.148 }, 00:07:54.148 { 00:07:54.148 "method": "bdev_iscsi_set_options", 00:07:54.148 "params": { 00:07:54.148 "timeout_sec": 30 00:07:54.148 } 00:07:54.148 }, 00:07:54.148 { 00:07:54.148 "method": "bdev_nvme_set_options", 00:07:54.148 "params": { 00:07:54.148 "action_on_timeout": "none", 00:07:54.148 "timeout_us": 0, 00:07:54.148 "timeout_admin_us": 0, 00:07:54.148 "keep_alive_timeout_ms": 10000, 00:07:54.148 "arbitration_burst": 0, 00:07:54.148 "low_priority_weight": 0, 00:07:54.148 "medium_priority_weight": 0, 00:07:54.148 "high_priority_weight": 0, 00:07:54.148 "nvme_adminq_poll_period_us": 10000, 00:07:54.148 "nvme_ioq_poll_period_us": 0, 00:07:54.148 "io_queue_requests": 0, 00:07:54.148 "delay_cmd_submit": true, 00:07:54.148 "transport_retry_count": 4, 00:07:54.148 "bdev_retry_count": 3, 00:07:54.148 "transport_ack_timeout": 0, 00:07:54.148 "ctrlr_loss_timeout_sec": 0, 00:07:54.148 "reconnect_delay_sec": 0, 00:07:54.148 "fast_io_fail_timeout_sec": 0, 00:07:54.148 "disable_auto_failback": false, 00:07:54.148 "generate_uuids": false, 00:07:54.148 "transport_tos": 0, 00:07:54.148 "nvme_error_stat": false, 00:07:54.148 "rdma_srq_size": 0, 00:07:54.148 "io_path_stat": false, 
00:07:54.148 "allow_accel_sequence": false, 00:07:54.148 "rdma_max_cq_size": 0, 00:07:54.148 "rdma_cm_event_timeout_ms": 0, 00:07:54.148 "dhchap_digests": [ 00:07:54.148 "sha256", 00:07:54.148 "sha384", 00:07:54.148 "sha512" 00:07:54.148 ], 00:07:54.148 "dhchap_dhgroups": [ 00:07:54.148 "null", 00:07:54.148 "ffdhe2048", 00:07:54.148 "ffdhe3072", 00:07:54.148 "ffdhe4096", 00:07:54.148 "ffdhe6144", 00:07:54.148 "ffdhe8192" 00:07:54.148 ] 00:07:54.148 } 00:07:54.148 }, 00:07:54.148 { 00:07:54.148 "method": "bdev_nvme_set_hotplug", 00:07:54.148 "params": { 00:07:54.148 "period_us": 100000, 00:07:54.148 "enable": false 00:07:54.148 } 00:07:54.148 }, 00:07:54.148 { 00:07:54.148 "method": "bdev_wait_for_examine" 00:07:54.148 } 00:07:54.148 ] 00:07:54.148 }, 00:07:54.148 { 00:07:54.148 "subsystem": "scsi", 00:07:54.148 "config": null 00:07:54.148 }, 00:07:54.148 { 00:07:54.148 "subsystem": "scheduler", 00:07:54.148 "config": [ 00:07:54.148 { 00:07:54.148 "method": "framework_set_scheduler", 00:07:54.149 "params": { 00:07:54.149 "name": "static" 00:07:54.149 } 00:07:54.149 } 00:07:54.149 ] 00:07:54.149 }, 00:07:54.149 { 00:07:54.149 "subsystem": "vhost_scsi", 00:07:54.149 "config": [] 00:07:54.149 }, 00:07:54.149 { 00:07:54.149 "subsystem": "vhost_blk", 00:07:54.149 "config": [] 00:07:54.149 }, 00:07:54.149 { 00:07:54.149 "subsystem": "ublk", 00:07:54.149 "config": [] 00:07:54.149 }, 00:07:54.149 { 00:07:54.149 "subsystem": "nbd", 00:07:54.149 "config": [] 00:07:54.149 }, 00:07:54.149 { 00:07:54.149 "subsystem": "nvmf", 00:07:54.149 "config": [ 00:07:54.149 { 00:07:54.149 "method": "nvmf_set_config", 00:07:54.149 "params": { 00:07:54.149 "discovery_filter": "match_any", 00:07:54.149 "admin_cmd_passthru": { 00:07:54.149 "identify_ctrlr": false 00:07:54.149 }, 00:07:54.149 "dhchap_digests": [ 00:07:54.149 "sha256", 00:07:54.149 "sha384", 00:07:54.149 "sha512" 00:07:54.149 ], 00:07:54.149 "dhchap_dhgroups": [ 00:07:54.149 "null", 00:07:54.149 "ffdhe2048", 00:07:54.149 "ffdhe3072", 00:07:54.149 "ffdhe4096", 00:07:54.149 "ffdhe6144", 00:07:54.149 "ffdhe8192" 00:07:54.149 ] 00:07:54.149 } 00:07:54.149 }, 00:07:54.149 { 00:07:54.149 "method": "nvmf_set_max_subsystems", 00:07:54.149 "params": { 00:07:54.149 "max_subsystems": 1024 00:07:54.149 } 00:07:54.149 }, 00:07:54.149 { 00:07:54.149 "method": "nvmf_set_crdt", 00:07:54.149 "params": { 00:07:54.149 "crdt1": 0, 00:07:54.149 "crdt2": 0, 00:07:54.149 "crdt3": 0 00:07:54.149 } 00:07:54.149 }, 00:07:54.149 { 00:07:54.149 "method": "nvmf_create_transport", 00:07:54.149 "params": { 00:07:54.149 "trtype": "TCP", 00:07:54.149 "max_queue_depth": 128, 00:07:54.149 "max_io_qpairs_per_ctrlr": 127, 00:07:54.149 "in_capsule_data_size": 4096, 00:07:54.149 "max_io_size": 131072, 00:07:54.149 "io_unit_size": 131072, 00:07:54.149 "max_aq_depth": 128, 00:07:54.149 "num_shared_buffers": 511, 00:07:54.149 "buf_cache_size": 4294967295, 00:07:54.149 "dif_insert_or_strip": false, 00:07:54.149 "zcopy": false, 00:07:54.149 "c2h_success": true, 00:07:54.149 "sock_priority": 0, 00:07:54.149 "abort_timeout_sec": 1, 00:07:54.149 "ack_timeout": 0, 00:07:54.149 "data_wr_pool_size": 0 00:07:54.149 } 00:07:54.149 } 00:07:54.149 ] 00:07:54.149 }, 00:07:54.149 { 00:07:54.149 "subsystem": "iscsi", 00:07:54.149 "config": [ 00:07:54.149 { 00:07:54.149 "method": "iscsi_set_options", 00:07:54.149 "params": { 00:07:54.149 "node_base": "iqn.2016-06.io.spdk", 00:07:54.149 "max_sessions": 128, 00:07:54.149 "max_connections_per_session": 2, 00:07:54.149 "max_queue_depth": 64, 00:07:54.149 
"default_time2wait": 2, 00:07:54.149 "default_time2retain": 20, 00:07:54.149 "first_burst_length": 8192, 00:07:54.149 "immediate_data": true, 00:07:54.149 "allow_duplicated_isid": false, 00:07:54.149 "error_recovery_level": 0, 00:07:54.149 "nop_timeout": 60, 00:07:54.149 "nop_in_interval": 30, 00:07:54.149 "disable_chap": false, 00:07:54.149 "require_chap": false, 00:07:54.149 "mutual_chap": false, 00:07:54.149 "chap_group": 0, 00:07:54.149 "max_large_datain_per_connection": 64, 00:07:54.149 "max_r2t_per_connection": 4, 00:07:54.149 "pdu_pool_size": 36864, 00:07:54.149 "immediate_data_pool_size": 16384, 00:07:54.149 "data_out_pool_size": 2048 00:07:54.149 } 00:07:54.149 } 00:07:54.149 ] 00:07:54.149 } 00:07:54.149 ] 00:07:54.149 } 00:07:54.149 11:21:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:54.149 11:21:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58455 00:07:54.149 11:21:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58455 ']' 00:07:54.149 11:21:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58455 00:07:54.149 11:21:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:07:54.149 11:21:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:54.149 11:21:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58455 00:07:54.149 11:21:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:54.149 killing process with pid 58455 00:07:54.149 11:21:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:54.149 11:21:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58455' 00:07:54.149 11:21:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58455 00:07:54.149 11:21:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58455 00:07:57.433 11:22:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58516 00:07:57.433 11:22:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:07:57.433 11:22:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:02.689 11:22:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58516 00:08:02.689 11:22:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58516 ']' 00:08:02.689 11:22:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58516 00:08:02.689 11:22:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:08:02.689 11:22:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:02.689 11:22:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58516 00:08:02.689 11:22:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:02.689 11:22:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:02.689 11:22:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58516' 00:08:02.689 killing process with pid 58516 00:08:02.689 11:22:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 58516 00:08:02.689 11:22:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58516 00:08:05.219 11:22:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:05.219 11:22:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:05.219 00:08:05.219 real 0m12.429s 00:08:05.219 user 0m11.837s 00:08:05.219 sys 0m0.956s 00:08:05.219 11:22:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.219 11:22:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:05.219 ************************************ 00:08:05.219 END TEST skip_rpc_with_json 00:08:05.219 ************************************ 00:08:05.219 11:22:10 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:08:05.219 11:22:10 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:05.219 11:22:10 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:05.219 11:22:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.219 ************************************ 00:08:05.219 START TEST skip_rpc_with_delay 00:08:05.219 ************************************ 00:08:05.219 11:22:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:08:05.219 11:22:10 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:05.219 11:22:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:08:05.219 11:22:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:05.219 11:22:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:05.219 11:22:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:05.219 11:22:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:05.219 11:22:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:05.219 11:22:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:05.219 11:22:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:05.219 11:22:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:05.219 11:22:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:08:05.219 11:22:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:05.219 [2024-11-20 11:22:10.739691] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
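That *ERROR* line is the expected outcome, not a failure of the run: skip_rpc_with_delay wraps the launch in the harness's NOT helper, so the test passes precisely because spdk_tgt refuses to combine --no-rpc-server with --wait-for-rpc, and the es=1 bookkeeping below just confirms the non-zero exit. Stripped of the xtrace plumbing, the assertion reduces to a sketch like this (same binary and flags as the run above):

    tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    if "$tgt" --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo 'FAIL: contradictory flags were accepted' >&2
        exit 1
    fi
    echo 'OK: launch rejected, as app.c: 842 reports above'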
00:08:05.219 11:22:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:08:05.219 11:22:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:05.219 11:22:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:05.219 11:22:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:05.219 00:08:05.219 real 0m0.183s 00:08:05.219 user 0m0.091s 00:08:05.219 sys 0m0.090s 00:08:05.219 11:22:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.219 11:22:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:08:05.219 ************************************ 00:08:05.219 END TEST skip_rpc_with_delay 00:08:05.219 ************************************ 00:08:05.219 11:22:10 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:08:05.219 11:22:10 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:08:05.219 11:22:10 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:08:05.219 11:22:10 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:05.219 11:22:10 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:05.219 11:22:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.219 ************************************ 00:08:05.219 START TEST exit_on_failed_rpc_init 00:08:05.219 ************************************ 00:08:05.219 11:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:08:05.219 11:22:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58655 00:08:05.219 11:22:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58655 00:08:05.219 11:22:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:05.219 11:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 58655 ']' 00:08:05.219 11:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.219 11:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:05.219 11:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.219 11:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:05.219 11:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:05.219 [2024-11-20 11:22:10.973962] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:08:05.219 [2024-11-20 11:22:10.974112] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58655 ] 00:08:05.478 [2024-11-20 11:22:11.165317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.736 [2024-11-20 11:22:11.341009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.108 11:22:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:07.108 11:22:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:08:07.108 11:22:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:07.108 11:22:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:07.108 11:22:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:08:07.108 11:22:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:07.108 11:22:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:07.108 11:22:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:07.108 11:22:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:07.108 11:22:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:07.108 11:22:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:07.108 11:22:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:07.108 11:22:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:07.108 11:22:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:08:07.108 11:22:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:07.108 [2024-11-20 11:22:12.618747] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:08:07.108 [2024-11-20 11:22:12.618937] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58683 ] 00:08:07.108 [2024-11-20 11:22:12.826511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.397 [2024-11-20 11:22:13.011206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:07.397 [2024-11-20 11:22:13.011366] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
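This socket collision is deliberate: the first target (pid 58655, reactor on core 0) already owns /var/tmp/spdk.sock, so the second instance booted on core 1 must abort during RPC init, and the "Unable to start RPC service" and spdk_app_stop lines that follow complete the expected failure path. When two targets are actually meant to coexist, each needs its own RPC socket. A sketch of that arrangement, assuming the stock -r (target) and -s (rpc.py) socket options; the /var/tmp/spdk2.sock path is an arbitrary choice, not taken from this run:

    tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$tgt" -m 0x1 &                            # first instance: default /var/tmp/spdk.sock
    "$tgt" -m 0x2 -r /var/tmp/spdk2.sock &     # second instance: its own socket via -r
    scripts/rpc.py -s /var/tmp/spdk2.sock spdk_get_version   # clients pick a socket with -s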
00:08:07.397 [2024-11-20 11:22:13.011393] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:08:07.397 [2024-11-20 11:22:13.011425] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:07.656 11:22:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:08:07.656 11:22:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:07.656 11:22:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:08:07.656 11:22:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:08:07.656 11:22:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:08:07.656 11:22:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:07.656 11:22:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:07.656 11:22:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58655 00:08:07.656 11:22:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 58655 ']' 00:08:07.656 11:22:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 58655 00:08:07.656 11:22:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:08:07.656 11:22:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:07.656 11:22:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58655 00:08:07.656 11:22:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:07.656 killing process with pid 58655 00:08:07.656 11:22:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:07.657 11:22:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58655' 00:08:07.657 11:22:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 58655 00:08:07.657 11:22:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 58655 00:08:10.995 00:08:10.995 real 0m5.423s 00:08:10.996 user 0m5.975s 00:08:10.996 sys 0m0.706s 00:08:10.996 11:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:10.996 11:22:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:10.996 ************************************ 00:08:10.996 END TEST exit_on_failed_rpc_init 00:08:10.996 ************************************ 00:08:10.996 11:22:16 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:10.996 ************************************ 00:08:10.996 END TEST skip_rpc 00:08:10.996 ************************************ 00:08:10.996 00:08:10.996 real 0m26.181s 00:08:10.996 user 0m25.303s 00:08:10.996 sys 0m2.403s 00:08:10.996 11:22:16 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:10.996 11:22:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.996 11:22:16 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:10.996 11:22:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:10.996 11:22:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:10.996 11:22:16 -- common/autotest_common.sh@10 -- # set +x 00:08:10.996 
************************************ 00:08:10.996 START TEST rpc_client 00:08:10.996 ************************************ 00:08:10.996 11:22:16 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:10.996 * Looking for test storage... 00:08:10.996 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:08:10.996 11:22:16 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:10.996 11:22:16 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:08:10.996 11:22:16 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:10.996 11:22:16 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:10.996 11:22:16 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:10.996 11:22:16 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:10.996 11:22:16 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:10.996 11:22:16 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:08:10.996 11:22:16 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:08:10.996 11:22:16 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:08:10.996 11:22:16 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:08:10.996 11:22:16 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:08:10.996 11:22:16 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:08:10.996 11:22:16 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:08:10.996 11:22:16 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:10.996 11:22:16 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:08:10.996 11:22:16 rpc_client -- scripts/common.sh@345 -- # : 1 00:08:10.996 11:22:16 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:10.996 11:22:16 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:10.996 11:22:16 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:08:10.996 11:22:16 rpc_client -- scripts/common.sh@353 -- # local d=1 00:08:10.996 11:22:16 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:10.996 11:22:16 rpc_client -- scripts/common.sh@355 -- # echo 1 00:08:10.996 11:22:16 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:08:10.996 11:22:16 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:08:10.996 11:22:16 rpc_client -- scripts/common.sh@353 -- # local d=2 00:08:10.996 11:22:16 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:10.996 11:22:16 rpc_client -- scripts/common.sh@355 -- # echo 2 00:08:10.996 11:22:16 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:08:10.996 11:22:16 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:10.996 11:22:16 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:10.996 11:22:16 rpc_client -- scripts/common.sh@368 -- # return 0 00:08:10.996 11:22:16 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:10.996 11:22:16 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:10.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.996 --rc genhtml_branch_coverage=1 00:08:10.996 --rc genhtml_function_coverage=1 00:08:10.996 --rc genhtml_legend=1 00:08:10.996 --rc geninfo_all_blocks=1 00:08:10.996 --rc geninfo_unexecuted_blocks=1 00:08:10.996 00:08:10.996 ' 00:08:10.996 11:22:16 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:10.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.996 --rc genhtml_branch_coverage=1 00:08:10.996 --rc genhtml_function_coverage=1 00:08:10.996 --rc genhtml_legend=1 00:08:10.996 --rc geninfo_all_blocks=1 00:08:10.996 --rc geninfo_unexecuted_blocks=1 00:08:10.996 00:08:10.996 ' 00:08:10.996 11:22:16 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:10.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.996 --rc genhtml_branch_coverage=1 00:08:10.996 --rc genhtml_function_coverage=1 00:08:10.996 --rc genhtml_legend=1 00:08:10.996 --rc geninfo_all_blocks=1 00:08:10.996 --rc geninfo_unexecuted_blocks=1 00:08:10.996 00:08:10.996 ' 00:08:10.996 11:22:16 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:10.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.996 --rc genhtml_branch_coverage=1 00:08:10.996 --rc genhtml_function_coverage=1 00:08:10.996 --rc genhtml_legend=1 00:08:10.996 --rc geninfo_all_blocks=1 00:08:10.996 --rc geninfo_unexecuted_blocks=1 00:08:10.996 00:08:10.996 ' 00:08:10.996 11:22:16 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:08:10.996 OK 00:08:10.996 11:22:16 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:08:10.996 00:08:10.996 real 0m0.289s 00:08:10.996 user 0m0.176s 00:08:10.996 sys 0m0.126s 00:08:10.996 11:22:16 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:10.996 ************************************ 00:08:10.996 END TEST rpc_client 00:08:10.996 ************************************ 00:08:10.996 11:22:16 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:08:10.996 11:22:16 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:10.996 11:22:16 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:10.996 11:22:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:10.996 11:22:16 -- common/autotest_common.sh@10 -- # set +x 00:08:10.996 ************************************ 00:08:10.996 START TEST json_config 00:08:10.996 ************************************ 00:08:10.996 11:22:16 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:11.255 11:22:16 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:11.255 11:22:16 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:08:11.255 11:22:16 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:11.255 11:22:16 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:11.255 11:22:16 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:11.255 11:22:16 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:11.255 11:22:16 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:11.255 11:22:16 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:08:11.255 11:22:16 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:08:11.255 11:22:16 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:08:11.255 11:22:16 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:08:11.255 11:22:16 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:08:11.255 11:22:16 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:08:11.255 11:22:16 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:08:11.255 11:22:16 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:11.255 11:22:16 json_config -- scripts/common.sh@344 -- # case "$op" in 00:08:11.255 11:22:16 json_config -- scripts/common.sh@345 -- # : 1 00:08:11.255 11:22:16 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:11.255 11:22:16 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:11.255 11:22:16 json_config -- scripts/common.sh@365 -- # decimal 1 00:08:11.255 11:22:16 json_config -- scripts/common.sh@353 -- # local d=1 00:08:11.255 11:22:16 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:11.255 11:22:16 json_config -- scripts/common.sh@355 -- # echo 1 00:08:11.255 11:22:16 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:08:11.255 11:22:16 json_config -- scripts/common.sh@366 -- # decimal 2 00:08:11.255 11:22:16 json_config -- scripts/common.sh@353 -- # local d=2 00:08:11.256 11:22:16 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:11.256 11:22:16 json_config -- scripts/common.sh@355 -- # echo 2 00:08:11.256 11:22:16 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:08:11.256 11:22:16 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:11.256 11:22:16 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:11.256 11:22:16 json_config -- scripts/common.sh@368 -- # return 0 00:08:11.256 11:22:16 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:11.256 11:22:16 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:11.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.256 --rc genhtml_branch_coverage=1 00:08:11.256 --rc genhtml_function_coverage=1 00:08:11.256 --rc genhtml_legend=1 00:08:11.256 --rc geninfo_all_blocks=1 00:08:11.256 --rc geninfo_unexecuted_blocks=1 00:08:11.256 00:08:11.256 ' 00:08:11.256 11:22:16 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:11.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.256 --rc genhtml_branch_coverage=1 00:08:11.256 --rc genhtml_function_coverage=1 00:08:11.256 --rc genhtml_legend=1 00:08:11.256 --rc geninfo_all_blocks=1 00:08:11.256 --rc geninfo_unexecuted_blocks=1 00:08:11.256 00:08:11.256 ' 00:08:11.256 11:22:16 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:11.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.256 --rc genhtml_branch_coverage=1 00:08:11.256 --rc genhtml_function_coverage=1 00:08:11.256 --rc genhtml_legend=1 00:08:11.256 --rc geninfo_all_blocks=1 00:08:11.256 --rc geninfo_unexecuted_blocks=1 00:08:11.256 00:08:11.256 ' 00:08:11.256 11:22:16 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:11.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.256 --rc genhtml_branch_coverage=1 00:08:11.256 --rc genhtml_function_coverage=1 00:08:11.256 --rc genhtml_legend=1 00:08:11.256 --rc geninfo_all_blocks=1 00:08:11.256 --rc geninfo_unexecuted_blocks=1 00:08:11.256 00:08:11.256 ' 00:08:11.256 11:22:16 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:11.256 11:22:16 json_config -- nvmf/common.sh@7 -- # uname -s 00:08:11.256 11:22:16 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:11.256 11:22:16 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:11.256 11:22:16 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:11.256 11:22:16 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:11.256 11:22:16 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:11.256 11:22:16 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:11.256 11:22:16 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:11.256 11:22:16 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:11.256 11:22:16 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:11.256 11:22:16 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:11.256 11:22:16 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:17e3f908-f4fb-4b01-a2dd-8d15d253729f 00:08:11.256 11:22:16 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=17e3f908-f4fb-4b01-a2dd-8d15d253729f 00:08:11.256 11:22:16 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:11.256 11:22:16 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:11.256 11:22:16 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:11.256 11:22:16 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:11.256 11:22:16 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:11.256 11:22:16 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:08:11.256 11:22:16 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:11.256 11:22:16 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:11.256 11:22:16 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:11.256 11:22:16 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.256 11:22:16 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.256 11:22:16 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.256 11:22:16 json_config -- paths/export.sh@5 -- # export PATH 00:08:11.256 11:22:16 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.256 11:22:16 json_config -- nvmf/common.sh@51 -- # : 0 00:08:11.256 11:22:16 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:11.256 11:22:16 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:11.256 11:22:16 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:11.256 11:22:16 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:11.256 11:22:16 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:11.256 11:22:16 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:11.256 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:11.256 11:22:16 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:11.256 11:22:16 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:11.256 11:22:16 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:11.256 11:22:16 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:08:11.256 11:22:16 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:08:11.256 11:22:16 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:08:11.256 11:22:16 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:08:11.256 11:22:16 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:08:11.256 11:22:16 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:08:11.256 WARNING: No tests are enabled so not running JSON configuration tests 00:08:11.256 11:22:16 json_config -- json_config/json_config.sh@28 -- # exit 0 00:08:11.256 00:08:11.256 real 0m0.167s 00:08:11.256 user 0m0.106s 00:08:11.256 sys 0m0.065s 00:08:11.256 11:22:16 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:11.256 ************************************ 00:08:11.256 END TEST json_config 00:08:11.256 ************************************ 00:08:11.256 11:22:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:11.256 11:22:16 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:11.256 11:22:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:11.256 11:22:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.256 11:22:16 -- common/autotest_common.sh@10 -- # set +x 00:08:11.256 ************************************ 00:08:11.256 START TEST json_config_extra_key 00:08:11.256 ************************************ 00:08:11.256 11:22:16 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:11.256 11:22:17 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:11.256 11:22:17 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:11.256 11:22:17 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:08:11.516 11:22:17 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:11.516 11:22:17 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:11.516 11:22:17 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:11.516 11:22:17 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:11.516 11:22:17 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:08:11.516 11:22:17 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:08:11.516 11:22:17 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:08:11.516 11:22:17 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:08:11.516 11:22:17 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:08:11.516 11:22:17 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:08:11.516 11:22:17 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:08:11.516 11:22:17 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:11.516 11:22:17 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:08:11.516 11:22:17 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:08:11.516 11:22:17 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:11.516 11:22:17 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:11.516 11:22:17 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:08:11.516 11:22:17 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:08:11.516 11:22:17 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:11.516 11:22:17 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:08:11.516 11:22:17 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:08:11.516 11:22:17 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:08:11.516 11:22:17 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:08:11.516 11:22:17 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:11.516 11:22:17 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:08:11.516 11:22:17 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:08:11.516 11:22:17 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:11.516 11:22:17 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:11.516 11:22:17 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:08:11.516 11:22:17 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:11.516 11:22:17 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:11.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.516 --rc genhtml_branch_coverage=1 00:08:11.516 --rc genhtml_function_coverage=1 00:08:11.516 --rc genhtml_legend=1 00:08:11.516 --rc geninfo_all_blocks=1 00:08:11.516 --rc geninfo_unexecuted_blocks=1 00:08:11.516 00:08:11.516 ' 00:08:11.516 11:22:17 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:11.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.516 --rc genhtml_branch_coverage=1 00:08:11.516 --rc genhtml_function_coverage=1 00:08:11.516 --rc genhtml_legend=1 00:08:11.516 --rc geninfo_all_blocks=1 00:08:11.516 --rc geninfo_unexecuted_blocks=1 00:08:11.516 00:08:11.516 ' 00:08:11.516 11:22:17 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:11.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.516 --rc genhtml_branch_coverage=1 00:08:11.516 --rc genhtml_function_coverage=1 00:08:11.516 --rc genhtml_legend=1 00:08:11.516 --rc geninfo_all_blocks=1 00:08:11.516 --rc geninfo_unexecuted_blocks=1 00:08:11.516 00:08:11.516 ' 00:08:11.516 11:22:17 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:11.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.516 --rc genhtml_branch_coverage=1 00:08:11.516 --rc 
genhtml_function_coverage=1 00:08:11.516 --rc genhtml_legend=1 00:08:11.516 --rc geninfo_all_blocks=1 00:08:11.516 --rc geninfo_unexecuted_blocks=1 00:08:11.516 00:08:11.516 ' 00:08:11.516 11:22:17 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:11.516 11:22:17 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:08:11.516 11:22:17 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:11.516 11:22:17 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:11.516 11:22:17 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:11.516 11:22:17 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:11.516 11:22:17 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:11.516 11:22:17 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:11.516 11:22:17 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:11.516 11:22:17 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:11.516 11:22:17 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:11.516 11:22:17 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:11.516 11:22:17 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:17e3f908-f4fb-4b01-a2dd-8d15d253729f 00:08:11.516 11:22:17 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=17e3f908-f4fb-4b01-a2dd-8d15d253729f 00:08:11.516 11:22:17 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:11.516 11:22:17 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:11.516 11:22:17 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:11.516 11:22:17 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:11.516 11:22:17 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:11.516 11:22:17 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:08:11.516 11:22:17 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:11.516 11:22:17 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:11.516 11:22:17 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:11.516 11:22:17 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.516 11:22:17 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.516 11:22:17 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.516 11:22:17 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:08:11.516 11:22:17 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.516 11:22:17 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:08:11.516 11:22:17 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:11.516 11:22:17 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:11.516 11:22:17 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:11.516 11:22:17 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:11.516 11:22:17 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:11.516 11:22:17 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:11.516 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:11.516 11:22:17 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:11.516 11:22:17 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:11.516 11:22:17 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:11.517 11:22:17 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:08:11.517 11:22:17 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:08:11.517 11:22:17 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:08:11.517 INFO: launching applications... 00:08:11.517 11:22:17 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:08:11.517 11:22:17 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:08:11.517 11:22:17 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:08:11.517 11:22:17 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:08:11.517 11:22:17 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:08:11.517 11:22:17 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:08:11.517 11:22:17 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:11.517 11:22:17 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
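The "/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected" message above (it recurs every time nvmf/common.sh is sourced) is a genuine bash error: a single-bracket numeric test is being run against an empty string, so the comparison fails noisily and falls through to false. A minimal sketch of a defensive guard, using a placeholder flag name rather than the variable line 33 actually tests:

    # SOME_FLAG is a hypothetical stand-in for the variable tested at
    # nvmf/common.sh line 33. Defaulting empty/unset to 0 keeps the test
    # numeric and silences "[: : integer expression expected".
    if [[ "${SOME_FLAG:-0}" -eq 1 ]]; then
        echo "feature enabled"
    fi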
00:08:11.517 11:22:17 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:11.517 11:22:17 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:08:11.517 11:22:17 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:08:11.517 11:22:17 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:11.517 11:22:17 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:11.517 11:22:17 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:08:11.517 11:22:17 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:11.517 11:22:17 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:11.517 11:22:17 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58894 00:08:11.517 11:22:17 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:11.517 Waiting for target to run... 00:08:11.517 11:22:17 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:11.517 11:22:17 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58894 /var/tmp/spdk_tgt.sock 00:08:11.517 11:22:17 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 58894 ']' 00:08:11.517 11:22:17 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:11.517 11:22:17 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:11.517 11:22:17 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:11.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:11.517 11:22:17 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:11.517 11:22:17 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:11.776 [2024-11-20 11:22:17.279945] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:08:11.776 [2024-11-20 11:22:17.280299] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58894 ] 00:08:12.033 [2024-11-20 11:22:17.704525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.292 [2024-11-20 11:22:17.869157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.228 11:22:18 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:13.228 00:08:13.228 INFO: shutting down applications... 00:08:13.228 11:22:18 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:08:13.228 11:22:18 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:08:13.228 11:22:18 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
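The trace that follows is json_config_test_shutdown_app: send SIGINT to the target, then poll it with kill -0 every 0.5 s for up to 30 iterations. Together with the waitforlisten call above (max_retries=100), the lifecycle reduces to roughly the sketch below; paths and flags are copied from the trace, and a socket-existence check stands in for waitforlisten's actual RPC probe:

    # Condensed start/stop pattern; the real helpers live in
    # test/json_config/common.sh and do more bookkeeping.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    pid=$!

    for ((i = 0; i < 100; i++)); do            # waitforlisten: max_retries=100
        [[ -S /var/tmp/spdk_tgt.sock ]] && break
        sleep 0.1
    done

    kill -SIGINT "$pid"                        # ask the target to shut down
    for ((i = 0; i < 30; i++)); do             # poll up to ~15 s, as traced below
        kill -0 "$pid" 2>/dev/null || break    # kill -0 fails once the pid is gone
        sleep 0.5
    done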
00:08:13.228 11:22:18 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:08:13.228 11:22:18 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:08:13.228 11:22:18 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:13.228 11:22:18 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58894 ]] 00:08:13.228 11:22:18 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58894 00:08:13.228 11:22:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:13.228 11:22:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:13.228 11:22:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58894 00:08:13.228 11:22:18 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:13.796 11:22:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:13.796 11:22:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:13.796 11:22:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58894 00:08:13.796 11:22:19 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:14.054 11:22:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:14.054 11:22:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:14.054 11:22:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58894 00:08:14.054 11:22:19 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:14.622 11:22:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:14.622 11:22:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:14.622 11:22:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58894 00:08:14.622 11:22:20 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:15.270 11:22:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:15.270 11:22:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:15.270 11:22:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58894 00:08:15.270 11:22:20 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:15.530 11:22:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:15.530 11:22:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:15.530 11:22:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58894 00:08:15.530 11:22:21 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:16.099 11:22:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:16.099 11:22:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:16.099 11:22:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58894 00:08:16.099 11:22:21 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:16.666 11:22:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:16.666 11:22:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:16.666 11:22:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58894 00:08:16.666 SPDK target shutdown done 00:08:16.666 Success 00:08:16.666 11:22:22 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:16.666 11:22:22 json_config_extra_key -- json_config/common.sh@43 -- # break 00:08:16.666 11:22:22 json_config_extra_key -- 
json_config/common.sh@48 -- # [[ -n '' ]] 00:08:16.666 11:22:22 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:16.666 11:22:22 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:08:16.666 00:08:16.666 real 0m5.342s 00:08:16.666 user 0m4.990s 00:08:16.666 sys 0m0.618s 00:08:16.666 11:22:22 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:16.666 11:22:22 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:16.666 ************************************ 00:08:16.666 END TEST json_config_extra_key 00:08:16.666 ************************************ 00:08:16.666 11:22:22 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:16.666 11:22:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:16.666 11:22:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.666 11:22:22 -- common/autotest_common.sh@10 -- # set +x 00:08:16.666 ************************************ 00:08:16.666 START TEST alias_rpc 00:08:16.666 ************************************ 00:08:16.666 11:22:22 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:16.925 * Looking for test storage... 00:08:16.925 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:08:16.925 11:22:22 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:16.925 11:22:22 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:08:16.925 11:22:22 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:16.925 11:22:22 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:16.925 11:22:22 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:16.925 11:22:22 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:16.925 11:22:22 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:16.925 11:22:22 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:16.925 11:22:22 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:16.925 11:22:22 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:16.925 11:22:22 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:16.925 11:22:22 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:16.925 11:22:22 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:16.925 11:22:22 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:16.925 11:22:22 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:16.925 11:22:22 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:16.925 11:22:22 alias_rpc -- scripts/common.sh@345 -- # : 1 00:08:16.925 11:22:22 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:16.925 11:22:22 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:16.925 11:22:22 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:16.925 11:22:22 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:08:16.925 11:22:22 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:16.925 11:22:22 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:08:16.925 11:22:22 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:16.925 11:22:22 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:16.925 11:22:22 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:08:16.925 11:22:22 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:16.925 11:22:22 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:08:16.925 11:22:22 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:16.925 11:22:22 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:16.926 11:22:22 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:16.926 11:22:22 alias_rpc -- scripts/common.sh@368 -- # return 0 00:08:16.926 11:22:22 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:16.926 11:22:22 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:16.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.926 --rc genhtml_branch_coverage=1 00:08:16.926 --rc genhtml_function_coverage=1 00:08:16.926 --rc genhtml_legend=1 00:08:16.926 --rc geninfo_all_blocks=1 00:08:16.926 --rc geninfo_unexecuted_blocks=1 00:08:16.926 00:08:16.926 ' 00:08:16.926 11:22:22 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:16.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.926 --rc genhtml_branch_coverage=1 00:08:16.926 --rc genhtml_function_coverage=1 00:08:16.926 --rc genhtml_legend=1 00:08:16.926 --rc geninfo_all_blocks=1 00:08:16.926 --rc geninfo_unexecuted_blocks=1 00:08:16.926 00:08:16.926 ' 00:08:16.926 11:22:22 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:16.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.926 --rc genhtml_branch_coverage=1 00:08:16.926 --rc genhtml_function_coverage=1 00:08:16.926 --rc genhtml_legend=1 00:08:16.926 --rc geninfo_all_blocks=1 00:08:16.926 --rc geninfo_unexecuted_blocks=1 00:08:16.926 00:08:16.926 ' 00:08:16.926 11:22:22 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:16.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.926 --rc genhtml_branch_coverage=1 00:08:16.926 --rc genhtml_function_coverage=1 00:08:16.926 --rc genhtml_legend=1 00:08:16.926 --rc geninfo_all_blocks=1 00:08:16.926 --rc geninfo_unexecuted_blocks=1 00:08:16.926 00:08:16.926 ' 00:08:16.926 11:22:22 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:16.926 11:22:22 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59012 00:08:16.926 11:22:22 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:16.926 11:22:22 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59012 00:08:16.926 11:22:22 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 59012 ']' 00:08:16.926 11:22:22 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.926 11:22:22 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:16.926 11:22:22 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:08:16.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.926 11:22:22 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:16.926 11:22:22 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.184 [2024-11-20 11:22:22.691446] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:08:17.184 [2024-11-20 11:22:22.691862] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59012 ] 00:08:17.184 [2024-11-20 11:22:22.888950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.443 [2024-11-20 11:22:23.027931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.378 11:22:23 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:18.378 11:22:23 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:18.378 11:22:23 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:08:18.636 11:22:24 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59012 00:08:18.636 11:22:24 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 59012 ']' 00:08:18.636 11:22:24 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 59012 00:08:18.636 11:22:24 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:08:18.636 11:22:24 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:18.636 11:22:24 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59012 00:08:18.636 killing process with pid 59012 00:08:18.636 11:22:24 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:18.636 11:22:24 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:18.636 11:22:24 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59012' 00:08:18.636 11:22:24 alias_rpc -- common/autotest_common.sh@973 -- # kill 59012 00:08:18.636 11:22:24 alias_rpc -- common/autotest_common.sh@978 -- # wait 59012 00:08:21.924 ************************************ 00:08:21.924 END TEST alias_rpc 00:08:21.924 ************************************ 00:08:21.924 00:08:21.924 real 0m4.617s 00:08:21.924 user 0m4.612s 00:08:21.924 sys 0m0.646s 00:08:21.924 11:22:26 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.924 11:22:26 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:21.924 11:22:27 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:08:21.924 11:22:27 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:21.924 11:22:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:21.924 11:22:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.924 11:22:27 -- common/autotest_common.sh@10 -- # set +x 00:08:21.924 ************************************ 00:08:21.924 START TEST spdkcli_tcp 00:08:21.924 ************************************ 00:08:21.924 11:22:27 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:21.924 * Looking for test storage... 
00:08:21.924 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:08:21.924 11:22:27 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:21.924 11:22:27 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:08:21.924 11:22:27 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:21.924 11:22:27 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:21.924 11:22:27 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:21.924 11:22:27 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:21.924 11:22:27 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:21.924 11:22:27 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:21.924 11:22:27 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:21.924 11:22:27 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:21.924 11:22:27 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:21.924 11:22:27 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:21.924 11:22:27 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:21.924 11:22:27 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:21.924 11:22:27 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:21.924 11:22:27 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:21.924 11:22:27 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:08:21.924 11:22:27 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:21.924 11:22:27 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:21.924 11:22:27 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:21.924 11:22:27 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:08:21.924 11:22:27 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:21.924 11:22:27 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:08:21.924 11:22:27 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:21.924 11:22:27 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:21.924 11:22:27 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:08:21.924 11:22:27 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:21.924 11:22:27 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:08:21.924 11:22:27 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:21.924 11:22:27 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:21.924 11:22:27 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:21.924 11:22:27 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:08:21.924 11:22:27 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:21.924 11:22:27 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:21.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.924 --rc genhtml_branch_coverage=1 00:08:21.924 --rc genhtml_function_coverage=1 00:08:21.924 --rc genhtml_legend=1 00:08:21.924 --rc geninfo_all_blocks=1 00:08:21.924 --rc geninfo_unexecuted_blocks=1 00:08:21.924 00:08:21.924 ' 00:08:21.924 11:22:27 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:21.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.924 --rc genhtml_branch_coverage=1 00:08:21.924 --rc genhtml_function_coverage=1 00:08:21.924 --rc genhtml_legend=1 00:08:21.924 --rc geninfo_all_blocks=1 00:08:21.924 --rc geninfo_unexecuted_blocks=1 00:08:21.924 
00:08:21.924 ' 00:08:21.924 11:22:27 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:21.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.924 --rc genhtml_branch_coverage=1 00:08:21.924 --rc genhtml_function_coverage=1 00:08:21.924 --rc genhtml_legend=1 00:08:21.925 --rc geninfo_all_blocks=1 00:08:21.925 --rc geninfo_unexecuted_blocks=1 00:08:21.925 00:08:21.925 ' 00:08:21.925 11:22:27 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:21.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.925 --rc genhtml_branch_coverage=1 00:08:21.925 --rc genhtml_function_coverage=1 00:08:21.925 --rc genhtml_legend=1 00:08:21.925 --rc geninfo_all_blocks=1 00:08:21.925 --rc geninfo_unexecuted_blocks=1 00:08:21.925 00:08:21.925 ' 00:08:21.925 11:22:27 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:08:21.925 11:22:27 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:08:21.925 11:22:27 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:08:21.925 11:22:27 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:08:21.925 11:22:27 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:08:21.925 11:22:27 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:08:21.925 11:22:27 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:08:21.925 11:22:27 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:21.925 11:22:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:21.925 11:22:27 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59130 00:08:21.925 11:22:27 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59130 00:08:21.925 11:22:27 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:08:21.925 11:22:27 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 59130 ']' 00:08:21.925 11:22:27 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.925 11:22:27 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:21.925 11:22:27 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.925 11:22:27 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:21.925 11:22:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:21.925 [2024-11-20 11:22:27.373727] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:08:21.925 [2024-11-20 11:22:27.374115] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59130 ] 00:08:21.925 [2024-11-20 11:22:27.551902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:22.185 [2024-11-20 11:22:27.714043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.185 [2024-11-20 11:22:27.714086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.119 11:22:28 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:23.119 11:22:28 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:08:23.119 11:22:28 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59147 00:08:23.119 11:22:28 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:08:23.119 11:22:28 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:08:23.378 [ 00:08:23.378 "bdev_malloc_delete", 00:08:23.378 "bdev_malloc_create", 00:08:23.378 "bdev_null_resize", 00:08:23.378 "bdev_null_delete", 00:08:23.378 "bdev_null_create", 00:08:23.378 "bdev_nvme_cuse_unregister", 00:08:23.378 "bdev_nvme_cuse_register", 00:08:23.378 "bdev_opal_new_user", 00:08:23.378 "bdev_opal_set_lock_state", 00:08:23.378 "bdev_opal_delete", 00:08:23.378 "bdev_opal_get_info", 00:08:23.378 "bdev_opal_create", 00:08:23.378 "bdev_nvme_opal_revert", 00:08:23.378 "bdev_nvme_opal_init", 00:08:23.378 "bdev_nvme_send_cmd", 00:08:23.378 "bdev_nvme_set_keys", 00:08:23.378 "bdev_nvme_get_path_iostat", 00:08:23.378 "bdev_nvme_get_mdns_discovery_info", 00:08:23.378 "bdev_nvme_stop_mdns_discovery", 00:08:23.378 "bdev_nvme_start_mdns_discovery", 00:08:23.378 "bdev_nvme_set_multipath_policy", 00:08:23.378 "bdev_nvme_set_preferred_path", 00:08:23.378 "bdev_nvme_get_io_paths", 00:08:23.378 "bdev_nvme_remove_error_injection", 00:08:23.378 "bdev_nvme_add_error_injection", 00:08:23.378 "bdev_nvme_get_discovery_info", 00:08:23.378 "bdev_nvme_stop_discovery", 00:08:23.378 "bdev_nvme_start_discovery", 00:08:23.378 "bdev_nvme_get_controller_health_info", 00:08:23.378 "bdev_nvme_disable_controller", 00:08:23.378 "bdev_nvme_enable_controller", 00:08:23.378 "bdev_nvme_reset_controller", 00:08:23.378 "bdev_nvme_get_transport_statistics", 00:08:23.378 "bdev_nvme_apply_firmware", 00:08:23.378 "bdev_nvme_detach_controller", 00:08:23.378 "bdev_nvme_get_controllers", 00:08:23.378 "bdev_nvme_attach_controller", 00:08:23.379 "bdev_nvme_set_hotplug", 00:08:23.379 "bdev_nvme_set_options", 00:08:23.379 "bdev_passthru_delete", 00:08:23.379 "bdev_passthru_create", 00:08:23.379 "bdev_lvol_set_parent_bdev", 00:08:23.379 "bdev_lvol_set_parent", 00:08:23.379 "bdev_lvol_check_shallow_copy", 00:08:23.379 "bdev_lvol_start_shallow_copy", 00:08:23.379 "bdev_lvol_grow_lvstore", 00:08:23.379 "bdev_lvol_get_lvols", 00:08:23.379 "bdev_lvol_get_lvstores", 00:08:23.379 "bdev_lvol_delete", 00:08:23.379 "bdev_lvol_set_read_only", 00:08:23.379 "bdev_lvol_resize", 00:08:23.379 "bdev_lvol_decouple_parent", 00:08:23.379 "bdev_lvol_inflate", 00:08:23.379 "bdev_lvol_rename", 00:08:23.379 "bdev_lvol_clone_bdev", 00:08:23.379 "bdev_lvol_clone", 00:08:23.379 "bdev_lvol_snapshot", 00:08:23.379 "bdev_lvol_create", 00:08:23.379 "bdev_lvol_delete_lvstore", 00:08:23.379 "bdev_lvol_rename_lvstore", 00:08:23.379 
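The rpc_get_methods call above is the heart of the TCP test: socat (pid 59147) bridges TCP port 9998 to the target's UNIX-domain socket, and rpc.py, with -r 100 retries and a 2-second timeout, issues the JSON-RPC request over the TCP side, producing the method list that follows. Stripped of the test harness, the bridge is roughly:

    # Bridge TCP :9998 to the target's RPC socket, query it, tear down.
    # Commands mirror the trace above; run from the spdk repository root.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid"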
"bdev_lvol_create_lvstore", 00:08:23.379 "bdev_raid_set_options", 00:08:23.379 "bdev_raid_remove_base_bdev", 00:08:23.379 "bdev_raid_add_base_bdev", 00:08:23.379 "bdev_raid_delete", 00:08:23.379 "bdev_raid_create", 00:08:23.379 "bdev_raid_get_bdevs", 00:08:23.379 "bdev_error_inject_error", 00:08:23.379 "bdev_error_delete", 00:08:23.379 "bdev_error_create", 00:08:23.379 "bdev_split_delete", 00:08:23.379 "bdev_split_create", 00:08:23.379 "bdev_delay_delete", 00:08:23.379 "bdev_delay_create", 00:08:23.379 "bdev_delay_update_latency", 00:08:23.379 "bdev_zone_block_delete", 00:08:23.379 "bdev_zone_block_create", 00:08:23.379 "blobfs_create", 00:08:23.379 "blobfs_detect", 00:08:23.379 "blobfs_set_cache_size", 00:08:23.379 "bdev_xnvme_delete", 00:08:23.379 "bdev_xnvme_create", 00:08:23.379 "bdev_aio_delete", 00:08:23.379 "bdev_aio_rescan", 00:08:23.379 "bdev_aio_create", 00:08:23.379 "bdev_ftl_set_property", 00:08:23.379 "bdev_ftl_get_properties", 00:08:23.379 "bdev_ftl_get_stats", 00:08:23.379 "bdev_ftl_unmap", 00:08:23.379 "bdev_ftl_unload", 00:08:23.379 "bdev_ftl_delete", 00:08:23.379 "bdev_ftl_load", 00:08:23.379 "bdev_ftl_create", 00:08:23.379 "bdev_virtio_attach_controller", 00:08:23.379 "bdev_virtio_scsi_get_devices", 00:08:23.379 "bdev_virtio_detach_controller", 00:08:23.379 "bdev_virtio_blk_set_hotplug", 00:08:23.379 "bdev_iscsi_delete", 00:08:23.379 "bdev_iscsi_create", 00:08:23.379 "bdev_iscsi_set_options", 00:08:23.379 "accel_error_inject_error", 00:08:23.379 "ioat_scan_accel_module", 00:08:23.379 "dsa_scan_accel_module", 00:08:23.379 "iaa_scan_accel_module", 00:08:23.379 "keyring_file_remove_key", 00:08:23.379 "keyring_file_add_key", 00:08:23.379 "keyring_linux_set_options", 00:08:23.379 "fsdev_aio_delete", 00:08:23.379 "fsdev_aio_create", 00:08:23.379 "iscsi_get_histogram", 00:08:23.379 "iscsi_enable_histogram", 00:08:23.379 "iscsi_set_options", 00:08:23.379 "iscsi_get_auth_groups", 00:08:23.379 "iscsi_auth_group_remove_secret", 00:08:23.379 "iscsi_auth_group_add_secret", 00:08:23.379 "iscsi_delete_auth_group", 00:08:23.379 "iscsi_create_auth_group", 00:08:23.379 "iscsi_set_discovery_auth", 00:08:23.379 "iscsi_get_options", 00:08:23.379 "iscsi_target_node_request_logout", 00:08:23.379 "iscsi_target_node_set_redirect", 00:08:23.379 "iscsi_target_node_set_auth", 00:08:23.379 "iscsi_target_node_add_lun", 00:08:23.379 "iscsi_get_stats", 00:08:23.379 "iscsi_get_connections", 00:08:23.379 "iscsi_portal_group_set_auth", 00:08:23.379 "iscsi_start_portal_group", 00:08:23.379 "iscsi_delete_portal_group", 00:08:23.379 "iscsi_create_portal_group", 00:08:23.379 "iscsi_get_portal_groups", 00:08:23.379 "iscsi_delete_target_node", 00:08:23.379 "iscsi_target_node_remove_pg_ig_maps", 00:08:23.379 "iscsi_target_node_add_pg_ig_maps", 00:08:23.379 "iscsi_create_target_node", 00:08:23.379 "iscsi_get_target_nodes", 00:08:23.379 "iscsi_delete_initiator_group", 00:08:23.379 "iscsi_initiator_group_remove_initiators", 00:08:23.379 "iscsi_initiator_group_add_initiators", 00:08:23.379 "iscsi_create_initiator_group", 00:08:23.379 "iscsi_get_initiator_groups", 00:08:23.379 "nvmf_set_crdt", 00:08:23.379 "nvmf_set_config", 00:08:23.379 "nvmf_set_max_subsystems", 00:08:23.379 "nvmf_stop_mdns_prr", 00:08:23.379 "nvmf_publish_mdns_prr", 00:08:23.379 "nvmf_subsystem_get_listeners", 00:08:23.379 "nvmf_subsystem_get_qpairs", 00:08:23.379 "nvmf_subsystem_get_controllers", 00:08:23.379 "nvmf_get_stats", 00:08:23.379 "nvmf_get_transports", 00:08:23.379 "nvmf_create_transport", 00:08:23.379 "nvmf_get_targets", 00:08:23.379 
"nvmf_delete_target", 00:08:23.379 "nvmf_create_target", 00:08:23.379 "nvmf_subsystem_allow_any_host", 00:08:23.379 "nvmf_subsystem_set_keys", 00:08:23.379 "nvmf_subsystem_remove_host", 00:08:23.379 "nvmf_subsystem_add_host", 00:08:23.379 "nvmf_ns_remove_host", 00:08:23.379 "nvmf_ns_add_host", 00:08:23.379 "nvmf_subsystem_remove_ns", 00:08:23.379 "nvmf_subsystem_set_ns_ana_group", 00:08:23.379 "nvmf_subsystem_add_ns", 00:08:23.379 "nvmf_subsystem_listener_set_ana_state", 00:08:23.379 "nvmf_discovery_get_referrals", 00:08:23.379 "nvmf_discovery_remove_referral", 00:08:23.379 "nvmf_discovery_add_referral", 00:08:23.379 "nvmf_subsystem_remove_listener", 00:08:23.379 "nvmf_subsystem_add_listener", 00:08:23.379 "nvmf_delete_subsystem", 00:08:23.379 "nvmf_create_subsystem", 00:08:23.379 "nvmf_get_subsystems", 00:08:23.379 "env_dpdk_get_mem_stats", 00:08:23.379 "nbd_get_disks", 00:08:23.379 "nbd_stop_disk", 00:08:23.379 "nbd_start_disk", 00:08:23.379 "ublk_recover_disk", 00:08:23.379 "ublk_get_disks", 00:08:23.379 "ublk_stop_disk", 00:08:23.379 "ublk_start_disk", 00:08:23.379 "ublk_destroy_target", 00:08:23.379 "ublk_create_target", 00:08:23.379 "virtio_blk_create_transport", 00:08:23.379 "virtio_blk_get_transports", 00:08:23.379 "vhost_controller_set_coalescing", 00:08:23.379 "vhost_get_controllers", 00:08:23.379 "vhost_delete_controller", 00:08:23.379 "vhost_create_blk_controller", 00:08:23.379 "vhost_scsi_controller_remove_target", 00:08:23.379 "vhost_scsi_controller_add_target", 00:08:23.379 "vhost_start_scsi_controller", 00:08:23.379 "vhost_create_scsi_controller", 00:08:23.379 "thread_set_cpumask", 00:08:23.379 "scheduler_set_options", 00:08:23.379 "framework_get_governor", 00:08:23.379 "framework_get_scheduler", 00:08:23.379 "framework_set_scheduler", 00:08:23.379 "framework_get_reactors", 00:08:23.379 "thread_get_io_channels", 00:08:23.379 "thread_get_pollers", 00:08:23.379 "thread_get_stats", 00:08:23.379 "framework_monitor_context_switch", 00:08:23.379 "spdk_kill_instance", 00:08:23.379 "log_enable_timestamps", 00:08:23.379 "log_get_flags", 00:08:23.379 "log_clear_flag", 00:08:23.379 "log_set_flag", 00:08:23.379 "log_get_level", 00:08:23.379 "log_set_level", 00:08:23.379 "log_get_print_level", 00:08:23.379 "log_set_print_level", 00:08:23.379 "framework_enable_cpumask_locks", 00:08:23.379 "framework_disable_cpumask_locks", 00:08:23.379 "framework_wait_init", 00:08:23.379 "framework_start_init", 00:08:23.379 "scsi_get_devices", 00:08:23.379 "bdev_get_histogram", 00:08:23.379 "bdev_enable_histogram", 00:08:23.379 "bdev_set_qos_limit", 00:08:23.379 "bdev_set_qd_sampling_period", 00:08:23.379 "bdev_get_bdevs", 00:08:23.379 "bdev_reset_iostat", 00:08:23.379 "bdev_get_iostat", 00:08:23.379 "bdev_examine", 00:08:23.379 "bdev_wait_for_examine", 00:08:23.379 "bdev_set_options", 00:08:23.379 "accel_get_stats", 00:08:23.379 "accel_set_options", 00:08:23.379 "accel_set_driver", 00:08:23.379 "accel_crypto_key_destroy", 00:08:23.379 "accel_crypto_keys_get", 00:08:23.379 "accel_crypto_key_create", 00:08:23.379 "accel_assign_opc", 00:08:23.379 "accel_get_module_info", 00:08:23.379 "accel_get_opc_assignments", 00:08:23.379 "vmd_rescan", 00:08:23.379 "vmd_remove_device", 00:08:23.379 "vmd_enable", 00:08:23.379 "sock_get_default_impl", 00:08:23.379 "sock_set_default_impl", 00:08:23.379 "sock_impl_set_options", 00:08:23.379 "sock_impl_get_options", 00:08:23.379 "iobuf_get_stats", 00:08:23.379 "iobuf_set_options", 00:08:23.379 "keyring_get_keys", 00:08:23.379 "framework_get_pci_devices", 00:08:23.379 
"framework_get_config", 00:08:23.379 "framework_get_subsystems", 00:08:23.379 "fsdev_set_opts", 00:08:23.379 "fsdev_get_opts", 00:08:23.379 "trace_get_info", 00:08:23.379 "trace_get_tpoint_group_mask", 00:08:23.379 "trace_disable_tpoint_group", 00:08:23.379 "trace_enable_tpoint_group", 00:08:23.379 "trace_clear_tpoint_mask", 00:08:23.379 "trace_set_tpoint_mask", 00:08:23.379 "notify_get_notifications", 00:08:23.379 "notify_get_types", 00:08:23.379 "spdk_get_version", 00:08:23.379 "rpc_get_methods" 00:08:23.379 ] 00:08:23.379 11:22:28 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:08:23.379 11:22:28 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:23.379 11:22:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:23.379 11:22:29 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:23.379 11:22:29 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59130 00:08:23.379 11:22:29 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 59130 ']' 00:08:23.379 11:22:29 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 59130 00:08:23.379 11:22:29 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:08:23.379 11:22:29 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:23.379 11:22:29 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59130 00:08:23.380 killing process with pid 59130 00:08:23.380 11:22:29 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:23.380 11:22:29 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:23.380 11:22:29 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59130' 00:08:23.380 11:22:29 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 59130 00:08:23.380 11:22:29 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 59130 00:08:26.661 ************************************ 00:08:26.661 END TEST spdkcli_tcp 00:08:26.661 ************************************ 00:08:26.661 00:08:26.661 real 0m4.747s 00:08:26.661 user 0m8.603s 00:08:26.661 sys 0m0.685s 00:08:26.661 11:22:31 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.661 11:22:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:26.661 11:22:31 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:26.661 11:22:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:26.661 11:22:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.661 11:22:31 -- common/autotest_common.sh@10 -- # set +x 00:08:26.661 ************************************ 00:08:26.661 START TEST dpdk_mem_utility 00:08:26.661 ************************************ 00:08:26.661 11:22:31 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:26.661 * Looking for test storage... 
00:08:26.661 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:08:26.661 11:22:31 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:26.661 11:22:31 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:08:26.661 11:22:31 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:26.661 11:22:31 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:26.661 11:22:31 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:26.661 11:22:31 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:26.661 11:22:32 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:26.661 11:22:32 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:08:26.661 11:22:32 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:08:26.661 11:22:32 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:08:26.661 11:22:32 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:08:26.661 11:22:32 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:08:26.661 11:22:32 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:08:26.661 11:22:32 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:08:26.661 11:22:32 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:26.661 11:22:32 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:08:26.661 11:22:32 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:08:26.661 11:22:32 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:26.661 11:22:32 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:26.661 11:22:32 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:08:26.661 11:22:32 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:08:26.661 11:22:32 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:26.661 11:22:32 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:08:26.661 11:22:32 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:08:26.661 11:22:32 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:08:26.661 11:22:32 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:08:26.661 11:22:32 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:26.661 11:22:32 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:08:26.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:26.661 11:22:32 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:08:26.661 11:22:32 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:26.661 11:22:32 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:26.661 11:22:32 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:08:26.661 11:22:32 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:26.661 11:22:32 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:26.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.661 --rc genhtml_branch_coverage=1 00:08:26.661 --rc genhtml_function_coverage=1 00:08:26.661 --rc genhtml_legend=1 00:08:26.661 --rc geninfo_all_blocks=1 00:08:26.661 --rc geninfo_unexecuted_blocks=1 00:08:26.661 00:08:26.661 ' 00:08:26.661 11:22:32 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:26.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.661 --rc genhtml_branch_coverage=1 00:08:26.661 --rc genhtml_function_coverage=1 00:08:26.661 --rc genhtml_legend=1 00:08:26.661 --rc geninfo_all_blocks=1 00:08:26.661 --rc geninfo_unexecuted_blocks=1 00:08:26.661 00:08:26.661 ' 00:08:26.661 11:22:32 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:26.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.661 --rc genhtml_branch_coverage=1 00:08:26.661 --rc genhtml_function_coverage=1 00:08:26.661 --rc genhtml_legend=1 00:08:26.661 --rc geninfo_all_blocks=1 00:08:26.661 --rc geninfo_unexecuted_blocks=1 00:08:26.661 00:08:26.661 ' 00:08:26.661 11:22:32 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:26.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.661 --rc genhtml_branch_coverage=1 00:08:26.661 --rc genhtml_function_coverage=1 00:08:26.661 --rc genhtml_legend=1 00:08:26.661 --rc geninfo_all_blocks=1 00:08:26.661 --rc geninfo_unexecuted_blocks=1 00:08:26.661 00:08:26.661 ' 00:08:26.661 11:22:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:26.661 11:22:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59258 00:08:26.661 11:22:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59258 00:08:26.661 11:22:32 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 59258 ']' 00:08:26.661 11:22:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:26.661 11:22:32 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.661 11:22:32 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:26.661 11:22:32 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:26.661 11:22:32 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:26.661 11:22:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:26.661 [2024-11-20 11:22:32.181165] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:08:26.661 [2024-11-20 11:22:32.181602] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59258 ] 00:08:26.661 [2024-11-20 11:22:32.376004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.920 [2024-11-20 11:22:32.519756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.856 11:22:33 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:27.856 11:22:33 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:08:27.856 11:22:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:08:27.856 11:22:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:08:27.856 11:22:33 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.856 11:22:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:27.856 { 00:08:27.856 "filename": "/tmp/spdk_mem_dump.txt" 00:08:27.856 } 00:08:27.856 11:22:33 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.856 11:22:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:28.117 DPDK memory size 816.000000 MiB in 1 heap(s) 00:08:28.117 1 heaps totaling size 816.000000 MiB 00:08:28.117 size: 816.000000 MiB heap id: 0 00:08:28.117 end heaps---------- 00:08:28.117 9 mempools totaling size 595.772034 MiB 00:08:28.117 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:08:28.117 size: 158.602051 MiB name: PDU_data_out_Pool 00:08:28.117 size: 92.545471 MiB name: bdev_io_59258 00:08:28.117 size: 50.003479 MiB name: msgpool_59258 00:08:28.117 size: 36.509338 MiB name: fsdev_io_59258 00:08:28.117 size: 21.763794 MiB name: PDU_Pool 00:08:28.117 size: 19.513306 MiB name: SCSI_TASK_Pool 00:08:28.117 size: 4.133484 MiB name: evtpool_59258 00:08:28.117 size: 0.026123 MiB name: Session_Pool 00:08:28.117 end mempools------- 00:08:28.117 6 memzones totaling size 4.142822 MiB 00:08:28.117 size: 1.000366 MiB name: RG_ring_0_59258 00:08:28.117 size: 1.000366 MiB name: RG_ring_1_59258 00:08:28.117 size: 1.000366 MiB name: RG_ring_4_59258 00:08:28.117 size: 1.000366 MiB name: RG_ring_5_59258 00:08:28.117 size: 0.125366 MiB name: RG_ring_2_59258 00:08:28.117 size: 0.015991 MiB name: RG_ring_3_59258 00:08:28.117 end memzones------- 00:08:28.117 11:22:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:08:28.117 heap id: 0 total size: 816.000000 MiB number of busy elements: 317 number of free elements: 18 00:08:28.117 list of free elements. 
size: 16.790894 MiB
00:08:28.117 element at address: 0x200006400000 with size: 1.995972 MiB
00:08:28.117 element at address: 0x20000a600000 with size: 1.995972 MiB
00:08:28.117 element at address: 0x200003e00000 with size: 1.991028 MiB
00:08:28.117 element at address: 0x200018d00040 with size: 0.999939 MiB
00:08:28.117 element at address: 0x200019100040 with size: 0.999939 MiB
00:08:28.117 element at address: 0x200019200000 with size: 0.999084 MiB
00:08:28.117 element at address: 0x200031e00000 with size: 0.994324 MiB
00:08:28.117 element at address: 0x200000400000 with size: 0.992004 MiB
00:08:28.117 element at address: 0x200018a00000 with size: 0.959656 MiB
00:08:28.117 element at address: 0x200019500040 with size: 0.936401 MiB
00:08:28.117 element at address: 0x200000200000 with size: 0.716980 MiB
00:08:28.117 element at address: 0x20001ac00000 with size: 0.561462 MiB
00:08:28.117 element at address: 0x200000c00000 with size: 0.490173 MiB
00:08:28.117 element at address: 0x200018e00000 with size: 0.487976 MiB
00:08:28.117 element at address: 0x200019600000 with size: 0.485413 MiB
00:08:28.117 element at address: 0x200012c00000 with size: 0.443237 MiB
00:08:28.117 element at address: 0x200028000000 with size: 0.390442 MiB
00:08:28.117 element at address: 0x200000800000 with size: 0.350891 MiB
00:08:28.117 list of standard malloc elements. size: 199.288208 MiB
00:08:28.117 element at address: 0x20000a7fef80 with size: 132.000183 MiB
00:08:28.117 element at address: 0x2000065fef80 with size: 64.000183 MiB
00:08:28.117 element at address: 0x200018bfff80 with size: 1.000183 MiB
00:08:28.117 element at address: 0x200018ffff80 with size: 1.000183 MiB
00:08:28.117 element at address: 0x2000193fff80 with size: 1.000183 MiB
00:08:28.117 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:08:28.117 element at address: 0x2000195eff40 with size: 0.062683 MiB
00:08:28.117 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:08:28.117 element at address: 0x20000a5ff040 with size: 0.000427 MiB
00:08:28.117 element at address: 0x2000195efdc0 with size: 0.000366 MiB
00:08:28.117 element at address: 0x200012bff040 with size: 0.000305 MiB
[... several hundred repetitive entries of the form "element at address: 0x... with size: 0.000244 MiB" elided; they cover the address ranges 0x2000002d7b00-0x2000004ffdc0, 0x20000087e1c0-0x2000008ffa80, 0x200000c7d7c0-0x200000cff000, 0x20000a5ff200-0x20000a5fff00, 0x200012bff180-0x200012cf24c0, 0x200018afdd00-0x200018efdd00, 0x2000192ffc40-0x2000196bc680, 0x20001ac8fbc0-0x20001ac953c0 and 0x200028063f40-0x20002806fe80 ...]
00:08:28.120 list of memzone associated elements. 
size: 599.920898 MiB 00:08:28.120 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:08:28.120 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:08:28.120 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:08:28.120 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:08:28.120 element at address: 0x200012df4740 with size: 92.045105 MiB 00:08:28.120 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_59258_0 00:08:28.120 element at address: 0x200000dff340 with size: 48.003113 MiB 00:08:28.120 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59258_0 00:08:28.120 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:08:28.120 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59258_0 00:08:28.120 element at address: 0x2000197be900 with size: 20.255615 MiB 00:08:28.120 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:08:28.120 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:08:28.120 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:08:28.120 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:08:28.120 associated memzone info: size: 3.000122 MiB name: MP_evtpool_59258_0 00:08:28.120 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:08:28.120 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59258 00:08:28.120 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:08:28.120 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59258 00:08:28.120 element at address: 0x200018efde00 with size: 1.008179 MiB 00:08:28.120 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:08:28.120 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:08:28.120 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:08:28.120 element at address: 0x200018afde00 with size: 1.008179 MiB 00:08:28.120 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:08:28.120 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:08:28.120 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:08:28.120 element at address: 0x200000cff100 with size: 1.000549 MiB 00:08:28.120 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59258 00:08:28.120 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:08:28.120 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59258 00:08:28.120 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:08:28.120 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59258 00:08:28.120 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:08:28.120 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59258 00:08:28.120 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:08:28.120 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59258 00:08:28.120 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:08:28.120 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59258 00:08:28.120 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:08:28.120 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:08:28.120 element at address: 0x200012c72280 with size: 0.500549 MiB 00:08:28.120 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:08:28.120 element at address: 0x20001967c440 with size: 0.250549 MiB 00:08:28.120 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:08:28.120 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:08:28.120 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_59258 00:08:28.120 element at address: 0x20000085df80 with size: 0.125549 MiB 00:08:28.120 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59258 00:08:28.120 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:08:28.120 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:08:28.120 element at address: 0x200028064140 with size: 0.023804 MiB 00:08:28.120 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:08:28.120 element at address: 0x200000859d40 with size: 0.016174 MiB 00:08:28.120 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59258 00:08:28.120 element at address: 0x20002806a2c0 with size: 0.002502 MiB 00:08:28.120 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:08:28.120 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:08:28.120 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59258 00:08:28.120 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:08:28.120 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59258 00:08:28.120 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:08:28.120 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59258 00:08:28.120 element at address: 0x20002806ae00 with size: 0.000366 MiB 00:08:28.120 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:08:28.120 11:22:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:08:28.120 11:22:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59258 00:08:28.120 11:22:33 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 59258 ']' 00:08:28.120 11:22:33 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 59258 00:08:28.120 11:22:33 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:08:28.120 11:22:33 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:28.120 11:22:33 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59258 00:08:28.120 killing process with pid 59258 00:08:28.120 11:22:33 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:28.120 11:22:33 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:28.120 11:22:33 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59258' 00:08:28.120 11:22:33 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 59258 00:08:28.120 11:22:33 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 59258 00:08:31.405 00:08:31.405 real 0m4.766s 00:08:31.405 user 0m4.782s 00:08:31.405 sys 0m0.678s 00:08:31.405 11:22:36 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.405 11:22:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:31.405 ************************************ 00:08:31.405 END TEST dpdk_mem_utility 00:08:31.405 ************************************ 00:08:31.405 11:22:36 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:31.405 11:22:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:31.405 11:22:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.405 11:22:36 -- common/autotest_common.sh@10 -- # set +x 
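The teardown just above is autotest_common.sh's killprocess helper doing its job: probe the pid, resolve the command name, signal the process, and reap it. A minimal bash sketch of the logic visible in the xtrace lines (a reconstruction for illustration, not the verbatim SPDK source; error handling is simplified):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1            # no pid recorded for this test run
        kill -0 "$pid" || return 1           # signal 0 only checks that the pid is alive
        if [ "$(uname)" = Linux ]; then
            # resolve the command name: reactor_0 here; a sudo wrapper must be signalled as root
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [ "$process_name" = sudo ]; then
            sudo kill "$pid"                 # signal the root-owned wrapper instead
        else
            echo "killing process with pid $pid"
            kill "$pid"
        fi
        wait "$pid"                          # reap the child so the next test starts clean
    }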
00:08:31.405 ************************************ 00:08:31.405 START TEST event 00:08:31.405 ************************************ 00:08:31.405 11:22:36 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:31.405 * Looking for test storage... 00:08:31.405 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:31.405 11:22:36 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:31.405 11:22:36 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:31.405 11:22:36 event -- common/autotest_common.sh@1693 -- # lcov --version 00:08:31.405 11:22:36 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:31.405 11:22:36 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:31.405 11:22:36 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:31.405 11:22:36 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:31.405 11:22:36 event -- scripts/common.sh@336 -- # IFS=.-: 00:08:31.405 11:22:36 event -- scripts/common.sh@336 -- # read -ra ver1 00:08:31.405 11:22:36 event -- scripts/common.sh@337 -- # IFS=.-: 00:08:31.405 11:22:36 event -- scripts/common.sh@337 -- # read -ra ver2 00:08:31.405 11:22:36 event -- scripts/common.sh@338 -- # local 'op=<' 00:08:31.405 11:22:36 event -- scripts/common.sh@340 -- # ver1_l=2 00:08:31.405 11:22:36 event -- scripts/common.sh@341 -- # ver2_l=1 00:08:31.405 11:22:36 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:31.405 11:22:36 event -- scripts/common.sh@344 -- # case "$op" in 00:08:31.405 11:22:36 event -- scripts/common.sh@345 -- # : 1 00:08:31.405 11:22:36 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:31.405 11:22:36 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:31.405 11:22:36 event -- scripts/common.sh@365 -- # decimal 1 00:08:31.405 11:22:36 event -- scripts/common.sh@353 -- # local d=1 00:08:31.405 11:22:36 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:31.405 11:22:36 event -- scripts/common.sh@355 -- # echo 1 00:08:31.405 11:22:36 event -- scripts/common.sh@365 -- # ver1[v]=1 00:08:31.405 11:22:36 event -- scripts/common.sh@366 -- # decimal 2 00:08:31.405 11:22:36 event -- scripts/common.sh@353 -- # local d=2 00:08:31.405 11:22:36 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:31.405 11:22:36 event -- scripts/common.sh@355 -- # echo 2 00:08:31.405 11:22:36 event -- scripts/common.sh@366 -- # ver2[v]=2 00:08:31.405 11:22:36 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:31.405 11:22:36 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:31.405 11:22:36 event -- scripts/common.sh@368 -- # return 0 00:08:31.405 11:22:36 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:31.405 11:22:36 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:31.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.405 --rc genhtml_branch_coverage=1 00:08:31.405 --rc genhtml_function_coverage=1 00:08:31.405 --rc genhtml_legend=1 00:08:31.405 --rc geninfo_all_blocks=1 00:08:31.405 --rc geninfo_unexecuted_blocks=1 00:08:31.405 00:08:31.405 ' 00:08:31.405 11:22:36 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:31.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.405 --rc genhtml_branch_coverage=1 00:08:31.405 --rc genhtml_function_coverage=1 00:08:31.405 --rc genhtml_legend=1 00:08:31.405 --rc 
geninfo_all_blocks=1 00:08:31.405 --rc geninfo_unexecuted_blocks=1 00:08:31.405 00:08:31.405 ' 00:08:31.405 11:22:36 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:31.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.405 --rc genhtml_branch_coverage=1 00:08:31.405 --rc genhtml_function_coverage=1 00:08:31.405 --rc genhtml_legend=1 00:08:31.405 --rc geninfo_all_blocks=1 00:08:31.405 --rc geninfo_unexecuted_blocks=1 00:08:31.405 00:08:31.405 ' 00:08:31.405 11:22:36 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:31.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.405 --rc genhtml_branch_coverage=1 00:08:31.405 --rc genhtml_function_coverage=1 00:08:31.405 --rc genhtml_legend=1 00:08:31.405 --rc geninfo_all_blocks=1 00:08:31.405 --rc geninfo_unexecuted_blocks=1 00:08:31.405 00:08:31.405 ' 00:08:31.405 11:22:36 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:31.405 11:22:36 event -- bdev/nbd_common.sh@6 -- # set -e 00:08:31.405 11:22:36 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:31.405 11:22:36 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:31.405 11:22:36 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.405 11:22:36 event -- common/autotest_common.sh@10 -- # set +x 00:08:31.405 ************************************ 00:08:31.405 START TEST event_perf 00:08:31.405 ************************************ 00:08:31.405 11:22:36 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:31.406 Running I/O for 1 seconds...[2024-11-20 11:22:36.910933] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:08:31.406 [2024-11-20 11:22:36.911289] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59371 ] 00:08:31.406 [2024-11-20 11:22:37.113439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:31.668 [2024-11-20 11:22:37.300281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.668 [2024-11-20 11:22:37.300460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:31.668 [2024-11-20 11:22:37.300565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:31.668 [2024-11-20 11:22:37.300563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.083 Running I/O for 1 seconds... 00:08:33.083 lcore 0: 173465 00:08:33.083 lcore 1: 173465 00:08:33.084 lcore 2: 173465 00:08:33.084 lcore 3: 173466 00:08:33.084 done. 
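event_perf was launched with -m 0xF -t 1, so the four reactors above each spin on their own lcore for one second and report how many events they processed; the aggregate rate is simply the sum of the per-lcore counters. A quick check with the numbers from this run (the arithmetic is an annotation, not captured output):

    # sum the per-lcore event counts reported above
    total=0
    for count in 173465 173465 173465 173466; do
        total=$((total + count))         # each value is one lcore's events in 1 s
    done
    echo "$total events/sec aggregate"   # prints: 693861 events/sec aggregate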
00:08:33.084 00:08:33.084 real 0m1.722s 00:08:33.084 user 0m4.453s 00:08:33.084 sys 0m0.145s 00:08:33.084 11:22:38 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.084 11:22:38 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:08:33.084 ************************************ 00:08:33.084 END TEST event_perf 00:08:33.084 ************************************ 00:08:33.084 11:22:38 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:33.084 11:22:38 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:33.084 11:22:38 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.084 11:22:38 event -- common/autotest_common.sh@10 -- # set +x 00:08:33.084 ************************************ 00:08:33.084 START TEST event_reactor 00:08:33.084 ************************************ 00:08:33.084 11:22:38 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:33.084 [2024-11-20 11:22:38.677111] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:08:33.084 [2024-11-20 11:22:38.677869] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59415 ] 00:08:33.343 [2024-11-20 11:22:38.858868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.343 [2024-11-20 11:22:38.993598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.720 test_start 00:08:34.720 oneshot 00:08:34.720 tick 100 00:08:34.720 tick 100 00:08:34.720 tick 250 00:08:34.720 tick 100 00:08:34.720 tick 100 00:08:34.720 tick 100 00:08:34.720 tick 250 00:08:34.720 tick 500 00:08:34.720 tick 100 00:08:34.720 tick 100 00:08:34.720 tick 250 00:08:34.720 tick 100 00:08:34.720 tick 100 00:08:34.720 test_end 00:08:34.720 00:08:34.720 real 0m1.627s 00:08:34.720 user 0m1.405s 00:08:34.720 sys 0m0.111s 00:08:34.720 11:22:40 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.720 ************************************ 00:08:34.720 END TEST event_reactor 00:08:34.720 ************************************ 00:08:34.720 11:22:40 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:08:34.720 11:22:40 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:34.720 11:22:40 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:34.720 11:22:40 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.720 11:22:40 event -- common/autotest_common.sh@10 -- # set +x 00:08:34.720 ************************************ 00:08:34.720 START TEST event_reactor_perf 00:08:34.720 ************************************ 00:08:34.720 11:22:40 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:34.720 [2024-11-20 11:22:40.366401] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
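The event_reactor trace above logs a oneshot timer plus what appear to be recurring timers with 100-, 250- and 500-tick periods, one line per expiry over the roughly one-second run. A hypothetical one-liner to tally such a trace, assuming the output was saved to a file named reactor.log (the file name is illustrative):

    # count how often each timer period fired during the run
    grep -o 'tick [0-9]*' reactor.log | sort | uniq -c
    #    9 tick 100
    #    3 tick 250
    #    1 tick 500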
00:08:34.720 [2024-11-20 11:22:40.366659] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59455 ] 00:08:34.979 [2024-11-20 11:22:40.570517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.237 [2024-11-20 11:22:40.764872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.614 test_start 00:08:36.614 test_end 00:08:36.614 Performance: 324896 events per second 00:08:36.614 00:08:36.614 real 0m1.702s 00:08:36.614 user 0m1.473s 00:08:36.614 sys 0m0.118s 00:08:36.614 11:22:42 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:36.614 11:22:42 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:08:36.614 ************************************ 00:08:36.614 END TEST event_reactor_perf 00:08:36.614 ************************************ 00:08:36.614 11:22:42 event -- event/event.sh@49 -- # uname -s 00:08:36.614 11:22:42 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:08:36.614 11:22:42 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:36.614 11:22:42 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:36.614 11:22:42 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:36.614 11:22:42 event -- common/autotest_common.sh@10 -- # set +x 00:08:36.614 ************************************ 00:08:36.614 START TEST event_scheduler 00:08:36.614 ************************************ 00:08:36.614 11:22:42 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:36.614 * Looking for test storage... 
00:08:36.614 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:08:36.614 11:22:42 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:36.614 11:22:42 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:08:36.614 11:22:42 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:36.614 11:22:42 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:36.614 11:22:42 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:36.614 11:22:42 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:36.614 11:22:42 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:36.614 11:22:42 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:08:36.614 11:22:42 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:08:36.614 11:22:42 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:08:36.614 11:22:42 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:08:36.614 11:22:42 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:08:36.614 11:22:42 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:08:36.614 11:22:42 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:08:36.614 11:22:42 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:36.614 11:22:42 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:08:36.614 11:22:42 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:08:36.614 11:22:42 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:36.614 11:22:42 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:36.614 11:22:42 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:08:36.614 11:22:42 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:08:36.614 11:22:42 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:36.615 11:22:42 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:08:36.615 11:22:42 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:08:36.615 11:22:42 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:08:36.615 11:22:42 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:08:36.615 11:22:42 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:36.615 11:22:42 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:08:36.615 11:22:42 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:08:36.615 11:22:42 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:36.615 11:22:42 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:36.615 11:22:42 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:08:36.615 11:22:42 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:36.615 11:22:42 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:36.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.615 --rc genhtml_branch_coverage=1 00:08:36.615 --rc genhtml_function_coverage=1 00:08:36.615 --rc genhtml_legend=1 00:08:36.615 --rc geninfo_all_blocks=1 00:08:36.615 --rc geninfo_unexecuted_blocks=1 00:08:36.615 00:08:36.615 ' 00:08:36.615 11:22:42 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:36.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.615 --rc genhtml_branch_coverage=1 00:08:36.615 --rc genhtml_function_coverage=1 00:08:36.615 --rc genhtml_legend=1 00:08:36.615 --rc geninfo_all_blocks=1 00:08:36.615 --rc geninfo_unexecuted_blocks=1 00:08:36.615 00:08:36.615 ' 00:08:36.615 11:22:42 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:36.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.615 --rc genhtml_branch_coverage=1 00:08:36.615 --rc genhtml_function_coverage=1 00:08:36.615 --rc genhtml_legend=1 00:08:36.615 --rc geninfo_all_blocks=1 00:08:36.615 --rc geninfo_unexecuted_blocks=1 00:08:36.615 00:08:36.615 ' 00:08:36.615 11:22:42 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:36.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.615 --rc genhtml_branch_coverage=1 00:08:36.615 --rc genhtml_function_coverage=1 00:08:36.615 --rc genhtml_legend=1 00:08:36.615 --rc geninfo_all_blocks=1 00:08:36.615 --rc geninfo_unexecuted_blocks=1 00:08:36.615 00:08:36.615 ' 00:08:36.615 11:22:42 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:08:36.615 11:22:42 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59531 00:08:36.615 11:22:42 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:08:36.615 11:22:42 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:08:36.615 11:22:42 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59531 00:08:36.615 11:22:42 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59531 ']' 00:08:36.615 11:22:42 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.615 11:22:42 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:36.615 11:22:42 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.615 11:22:42 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:36.615 11:22:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:36.875 [2024-11-20 11:22:42.405410] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:08:36.875 [2024-11-20 11:22:42.406406] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59531 ] 00:08:36.875 [2024-11-20 11:22:42.611508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:37.133 [2024-11-20 11:22:42.794973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.133 [2024-11-20 11:22:42.795029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:37.133 [2024-11-20 11:22:42.795095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:37.133 [2024-11-20 11:22:42.795104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:37.701 11:22:43 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:37.701 11:22:43 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:08:37.701 11:22:43 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:08:37.701 11:22:43 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.701 11:22:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:37.701 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:37.701 POWER: Cannot set governor of lcore 0 to userspace 00:08:37.701 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:37.701 POWER: Cannot set governor of lcore 0 to performance 00:08:37.701 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:37.701 POWER: Cannot set governor of lcore 0 to userspace 00:08:37.701 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:37.701 POWER: Cannot set governor of lcore 0 to userspace 00:08:37.701 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:08:37.701 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:08:37.701 POWER: Unable to set Power Management Environment for lcore 0 00:08:37.701 [2024-11-20 11:22:43.377787] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:08:37.701 [2024-11-20 11:22:43.377815] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:08:37.701 [2024-11-20 11:22:43.377830] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:08:37.701 [2024-11-20 11:22:43.377861] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:08:37.701 [2024-11-20 11:22:43.377886] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:08:37.701 [2024-11-20 11:22:43.377901] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:08:37.701 11:22:43 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.701 11:22:43 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:08:37.701 11:22:43 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.701 11:22:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:38.269 [2024-11-20 11:22:43.760066] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:08:38.269 11:22:43 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.269 11:22:43 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:08:38.269 11:22:43 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:38.269 11:22:43 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.269 11:22:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:38.269 ************************************ 00:08:38.269 START TEST scheduler_create_thread 00:08:38.269 ************************************ 00:08:38.269 11:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:08:38.269 11:22:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:08:38.269 11:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.269 11:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:38.269 2 00:08:38.269 11:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.269 11:22:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:08:38.269 11:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.269 11:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:38.269 3 00:08:38.269 11:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.269 11:22:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:08:38.269 11:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.269 11:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:38.269 4 00:08:38.269 11:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.269 11:22:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:08:38.269 11:22:43 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.269 11:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:38.269 5 00:08:38.269 11:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.269 11:22:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:08:38.269 11:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.269 11:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:38.269 6 00:08:38.269 11:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.269 11:22:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:08:38.269 11:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.269 11:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:38.269 7 00:08:38.269 11:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.269 11:22:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:08:38.269 11:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.269 11:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:38.269 8 00:08:38.269 11:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.269 11:22:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:08:38.269 11:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.269 11:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:38.269 9 00:08:38.269 11:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.269 11:22:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:08:38.269 11:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.269 11:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:38.269 10 00:08:38.269 11:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.269 11:22:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:08:38.269 11:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.269 11:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:38.269 11:22:43 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.269 11:22:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:08:38.269 11:22:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:08:38.269 11:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.269 11:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:38.269 11:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.269 11:22:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:08:38.269 11:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.270 11:22:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:39.648 11:22:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.648 11:22:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:08:39.648 11:22:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:08:39.648 11:22:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.648 11:22:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:41.026 ************************************ 00:08:41.026 END TEST scheduler_create_thread 00:08:41.026 ************************************ 00:08:41.026 11:22:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.026 00:08:41.026 real 0m2.621s 00:08:41.026 user 0m0.017s 00:08:41.026 sys 0m0.007s 00:08:41.026 11:22:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.026 11:22:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:41.026 11:22:46 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:08:41.026 11:22:46 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59531 00:08:41.026 11:22:46 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59531 ']' 00:08:41.026 11:22:46 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59531 00:08:41.026 11:22:46 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:08:41.026 11:22:46 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:41.026 11:22:46 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59531 00:08:41.026 killing process with pid 59531 00:08:41.026 11:22:46 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:08:41.026 11:22:46 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:08:41.026 11:22:46 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59531' 00:08:41.026 11:22:46 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59531 00:08:41.026 11:22:46 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 59531 00:08:41.284 [2024-11-20 11:22:46.875827] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:08:42.661 00:08:42.661 real 0m6.087s 00:08:42.661 user 0m10.398s 00:08:42.661 sys 0m0.572s 00:08:42.661 11:22:48 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.661 11:22:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:42.661 ************************************ 00:08:42.661 END TEST event_scheduler 00:08:42.661 ************************************ 00:08:42.661 11:22:48 event -- event/event.sh@51 -- # modprobe -n nbd 00:08:42.661 11:22:48 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:08:42.661 11:22:48 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:42.661 11:22:48 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.661 11:22:48 event -- common/autotest_common.sh@10 -- # set +x 00:08:42.661 ************************************ 00:08:42.661 START TEST app_repeat 00:08:42.661 ************************************ 00:08:42.661 11:22:48 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:08:42.661 11:22:48 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:42.661 11:22:48 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:42.661 11:22:48 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:08:42.661 11:22:48 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:42.661 11:22:48 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:08:42.661 11:22:48 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:08:42.661 11:22:48 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:08:42.661 11:22:48 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59643 00:08:42.661 11:22:48 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:08:42.661 11:22:48 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59643' 00:08:42.661 Process app_repeat pid: 59643 00:08:42.661 spdk_app_start Round 0 00:08:42.661 11:22:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:42.661 11:22:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:08:42.661 11:22:48 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:08:42.661 11:22:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59643 /var/tmp/spdk-nbd.sock 00:08:42.661 11:22:48 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59643 ']' 00:08:42.661 11:22:48 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:42.661 11:22:48 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:42.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:42.661 11:22:48 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:42.661 11:22:48 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:42.661 11:22:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:42.661 [2024-11-20 11:22:48.306531] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
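app_repeat is started with -r /var/tmp/spdk-nbd.sock, so the waitforlisten above blocks until the app's RPC server accepts connections on that UNIX-domain socket; every subsequent rpc.py call in this test passes the same socket via -s. As an illustration (bdev_get_bdevs is a standard SPDK RPC, but this particular call is not part of the captured run), the instance could also be queried by hand:

    # list the bdevs the app_repeat instance currently exposes
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_get_bdevs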
00:08:42.661 [2024-11-20 11:22:48.306714] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59643 ] 00:08:42.920 [2024-11-20 11:22:48.513843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:43.179 [2024-11-20 11:22:48.693522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.179 [2024-11-20 11:22:48.693552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.438 11:22:49 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:43.438 11:22:49 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:43.438 11:22:49 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:44.005 Malloc0 00:08:44.005 11:22:49 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:44.264 Malloc1 00:08:44.264 11:22:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:44.264 11:22:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:44.264 11:22:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:44.264 11:22:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:44.264 11:22:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:44.264 11:22:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:44.264 11:22:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:44.264 11:22:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:44.264 11:22:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:44.264 11:22:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:44.264 11:22:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:44.264 11:22:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:44.264 11:22:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:44.264 11:22:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:44.264 11:22:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:44.264 11:22:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:44.522 /dev/nbd0 00:08:44.522 11:22:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:44.522 11:22:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:44.522 11:22:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:44.522 11:22:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:44.522 11:22:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:44.522 11:22:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:44.522 11:22:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:44.522 11:22:50 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:08:44.522 11:22:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:44.522 11:22:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:44.522 11:22:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:44.522 1+0 records in 00:08:44.522 1+0 records out 00:08:44.522 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000497514 s, 8.2 MB/s 00:08:44.522 11:22:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:44.522 11:22:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:44.522 11:22:50 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:44.522 11:22:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:44.522 11:22:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:44.522 11:22:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:44.522 11:22:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:44.522 11:22:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:45.089 /dev/nbd1 00:08:45.089 11:22:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:45.089 11:22:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:45.089 11:22:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:45.089 11:22:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:45.089 11:22:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:45.089 11:22:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:45.089 11:22:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:45.089 11:22:50 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:45.089 11:22:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:45.089 11:22:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:45.089 11:22:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:45.089 1+0 records in 00:08:45.089 1+0 records out 00:08:45.089 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000339794 s, 12.1 MB/s 00:08:45.089 11:22:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:45.089 11:22:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:45.089 11:22:50 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:45.089 11:22:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:45.089 11:22:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:45.089 11:22:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:45.089 11:22:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:45.089 11:22:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:45.089 11:22:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:45.089 
11:22:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:45.348 11:22:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:45.348 { 00:08:45.348 "nbd_device": "/dev/nbd0", 00:08:45.348 "bdev_name": "Malloc0" 00:08:45.348 }, 00:08:45.348 { 00:08:45.348 "nbd_device": "/dev/nbd1", 00:08:45.348 "bdev_name": "Malloc1" 00:08:45.348 } 00:08:45.348 ]' 00:08:45.348 11:22:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:45.348 11:22:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:45.348 { 00:08:45.348 "nbd_device": "/dev/nbd0", 00:08:45.348 "bdev_name": "Malloc0" 00:08:45.348 }, 00:08:45.348 { 00:08:45.348 "nbd_device": "/dev/nbd1", 00:08:45.348 "bdev_name": "Malloc1" 00:08:45.348 } 00:08:45.348 ]' 00:08:45.348 11:22:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:45.348 /dev/nbd1' 00:08:45.348 11:22:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:45.348 /dev/nbd1' 00:08:45.348 11:22:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:45.348 11:22:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:45.348 11:22:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:45.348 11:22:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:45.348 11:22:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:45.348 11:22:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:45.348 11:22:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:45.348 11:22:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:45.348 11:22:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:45.348 11:22:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:45.348 11:22:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:45.348 11:22:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:45.348 256+0 records in 00:08:45.348 256+0 records out 00:08:45.348 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00905005 s, 116 MB/s 00:08:45.348 11:22:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:45.348 11:22:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:45.348 256+0 records in 00:08:45.348 256+0 records out 00:08:45.348 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0234939 s, 44.6 MB/s 00:08:45.348 11:22:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:45.348 11:22:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:45.607 256+0 records in 00:08:45.607 256+0 records out 00:08:45.607 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0388227 s, 27.0 MB/s 00:08:45.607 11:22:51 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:45.607 11:22:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:45.607 11:22:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:45.607 11:22:51 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:45.607 11:22:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:45.607 11:22:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:45.607 11:22:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:45.607 11:22:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:45.607 11:22:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:45.607 11:22:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:45.607 11:22:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:45.607 11:22:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:45.607 11:22:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:45.607 11:22:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:45.607 11:22:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:45.607 11:22:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:45.607 11:22:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:45.607 11:22:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:45.607 11:22:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:45.865 11:22:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:45.865 11:22:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:45.865 11:22:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:45.865 11:22:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:45.865 11:22:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:45.865 11:22:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:45.865 11:22:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:45.865 11:22:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:45.865 11:22:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:45.865 11:22:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:46.124 11:22:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:46.124 11:22:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:46.124 11:22:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:46.124 11:22:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:46.124 11:22:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:46.124 11:22:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:46.124 11:22:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:46.124 11:22:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:46.124 11:22:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:46.124 11:22:51 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:46.124 11:22:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:46.383 11:22:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:46.383 11:22:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:46.383 11:22:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:46.383 11:22:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:46.383 11:22:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:46.383 11:22:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:46.383 11:22:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:46.383 11:22:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:46.383 11:22:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:46.383 11:22:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:46.383 11:22:52 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:46.383 11:22:52 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:46.383 11:22:52 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:46.949 11:22:52 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:48.324 [2024-11-20 11:22:53.862847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:48.324 [2024-11-20 11:22:53.997505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.324 [2024-11-20 11:22:53.997511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:48.582 [2024-11-20 11:22:54.223981] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:48.582 [2024-11-20 11:22:54.224089] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:49.957 spdk_app_start Round 1 00:08:49.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:49.957 11:22:55 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:49.957 11:22:55 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:08:49.957 11:22:55 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59643 /var/tmp/spdk-nbd.sock 00:08:49.957 11:22:55 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59643 ']' 00:08:49.957 11:22:55 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:49.957 11:22:55 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:49.957 11:22:55 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
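Note: each app_repeat round traced above is the same RPC-driven NBD write/verify cycle. A minimal sketch of that cycle, assuming a running SPDK app listening on /var/tmp/spdk-nbd.sock and rpc.py from the SPDK repo (long repo paths shortened to rpc.py):

    # create two 64 MiB malloc bdevs with a 4 KiB block size
    rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # -> Malloc0
    rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # -> Malloc1

    # expose each bdev as a kernel NBD device
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1

    # push 1 MiB of random data through each device, then read back and compare
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if=nbdrandtest of="$dev" bs=4096 count=256 oflag=direct
        cmp -b -n 1M nbdrandtest "$dev"
    done
    rm nbdrandtest

    # detach the devices again
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1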
00:08:49.957 11:22:55 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:49.957 11:22:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:50.216 11:22:55 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:50.216 11:22:55 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:50.216 11:22:55 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:50.783 Malloc0 00:08:50.783 11:22:56 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:51.040 Malloc1 00:08:51.040 11:22:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:51.040 11:22:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:51.040 11:22:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:51.040 11:22:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:51.040 11:22:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:51.040 11:22:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:51.040 11:22:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:51.040 11:22:56 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:51.040 11:22:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:51.040 11:22:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:51.040 11:22:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:51.040 11:22:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:51.040 11:22:56 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:51.040 11:22:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:51.040 11:22:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:51.040 11:22:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:51.298 /dev/nbd0 00:08:51.298 11:22:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:51.298 11:22:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:51.298 11:22:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:51.298 11:22:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:51.298 11:22:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:51.298 11:22:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:51.298 11:22:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:51.298 11:22:57 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:51.298 11:22:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:51.298 11:22:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:51.298 11:22:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:51.298 1+0 records in 00:08:51.298 1+0 records out 
00:08:51.298 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000214693 s, 19.1 MB/s 00:08:51.298 11:22:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:51.298 11:22:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:51.298 11:22:57 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:51.298 11:22:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:51.298 11:22:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:51.298 11:22:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:51.298 11:22:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:51.298 11:22:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:51.862 /dev/nbd1 00:08:51.863 11:22:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:51.863 11:22:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:51.863 11:22:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:51.863 11:22:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:51.863 11:22:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:51.863 11:22:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:51.863 11:22:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:51.863 11:22:57 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:51.863 11:22:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:51.863 11:22:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:51.863 11:22:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:51.863 1+0 records in 00:08:51.863 1+0 records out 00:08:51.863 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000397533 s, 10.3 MB/s 00:08:51.863 11:22:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:51.863 11:22:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:51.863 11:22:57 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:51.863 11:22:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:51.863 11:22:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:51.863 11:22:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:51.863 11:22:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:51.863 11:22:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:51.863 11:22:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:51.863 11:22:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:52.120 11:22:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:52.120 { 00:08:52.120 "nbd_device": "/dev/nbd0", 00:08:52.120 "bdev_name": "Malloc0" 00:08:52.120 }, 00:08:52.120 { 00:08:52.120 "nbd_device": "/dev/nbd1", 00:08:52.120 "bdev_name": "Malloc1" 00:08:52.120 } 
00:08:52.120 ]' 00:08:52.120 11:22:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:52.120 { 00:08:52.120 "nbd_device": "/dev/nbd0", 00:08:52.120 "bdev_name": "Malloc0" 00:08:52.120 }, 00:08:52.120 { 00:08:52.120 "nbd_device": "/dev/nbd1", 00:08:52.120 "bdev_name": "Malloc1" 00:08:52.120 } 00:08:52.120 ]' 00:08:52.120 11:22:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:52.120 11:22:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:52.120 /dev/nbd1' 00:08:52.120 11:22:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:52.120 11:22:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:52.120 /dev/nbd1' 00:08:52.120 11:22:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:52.120 11:22:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:52.120 11:22:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:52.120 11:22:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:52.120 11:22:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:52.120 11:22:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:52.120 11:22:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:52.120 11:22:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:52.120 11:22:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:52.120 11:22:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:52.120 11:22:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:52.120 256+0 records in 00:08:52.120 256+0 records out 00:08:52.120 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00691686 s, 152 MB/s 00:08:52.120 11:22:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:52.120 11:22:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:52.120 256+0 records in 00:08:52.120 256+0 records out 00:08:52.120 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0321503 s, 32.6 MB/s 00:08:52.120 11:22:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:52.120 11:22:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:52.120 256+0 records in 00:08:52.120 256+0 records out 00:08:52.120 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0342485 s, 30.6 MB/s 00:08:52.120 11:22:57 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:52.120 11:22:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:52.120 11:22:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:52.120 11:22:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:52.120 11:22:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:52.120 11:22:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:52.120 11:22:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:52.120 11:22:57 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:52.120 11:22:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:52.120 11:22:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:52.120 11:22:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:52.120 11:22:57 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:52.379 11:22:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:52.379 11:22:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:52.379 11:22:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:52.379 11:22:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:52.379 11:22:57 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:52.379 11:22:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:52.379 11:22:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:52.637 11:22:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:52.637 11:22:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:52.637 11:22:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:52.637 11:22:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:52.637 11:22:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:52.637 11:22:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:52.637 11:22:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:52.637 11:22:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:52.637 11:22:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:52.637 11:22:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:52.896 11:22:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:52.896 11:22:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:52.896 11:22:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:52.896 11:22:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:52.896 11:22:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:52.896 11:22:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:52.896 11:22:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:52.896 11:22:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:52.896 11:22:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:52.896 11:22:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:52.896 11:22:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:53.154 11:22:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:53.154 11:22:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:53.154 11:22:58 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:08:53.154 11:22:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:53.154 11:22:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:53.154 11:22:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:53.154 11:22:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:53.154 11:22:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:53.154 11:22:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:53.154 11:22:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:53.154 11:22:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:53.154 11:22:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:53.155 11:22:58 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:53.721 11:22:59 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:55.201 [2024-11-20 11:23:00.572440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:55.201 [2024-11-20 11:23:00.697215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.201 [2024-11-20 11:23:00.697229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:55.201 [2024-11-20 11:23:00.916264] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:55.201 [2024-11-20 11:23:00.916372] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:56.572 spdk_app_start Round 2 00:08:56.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:56.572 11:23:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:56.572 11:23:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:08:56.572 11:23:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59643 /var/tmp/spdk-nbd.sock 00:08:56.572 11:23:02 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59643 ']' 00:08:56.572 11:23:02 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:56.572 11:23:02 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:56.572 11:23:02 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
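Note: the waitfornbd helper traced repeatedly above (common/autotest_common.sh, around lines 872-893) is what guards every dd against a not-yet-ready device. Reconstructed from the xtrace, it behaves roughly as below; this is a sketch, not the verbatim helper, and the retry delay is an assumption:

    waitfornbd() {
        local nbd_name=$1 i size
        # first wait for the kernel to publish the device in /proc/partitions
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed delay between retries
        done
        # then prove the device is readable: pull one 4 KiB block off it
        for ((i = 1; i <= 20; i++)); do
            if dd if="/dev/$nbd_name" of=nbdtest bs=4096 count=1 iflag=direct; then
                size=$(stat -c %s nbdtest)
                rm -f nbdtest
                [ "$size" != 0 ] && return 0
            fi
            sleep 0.1   # assumed delay between retries
        done
        return 1
    }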
00:08:56.572 11:23:02 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:56.572 11:23:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:57.139 11:23:02 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:57.139 11:23:02 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:57.139 11:23:02 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:57.397 Malloc0 00:08:57.397 11:23:02 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:57.656 Malloc1 00:08:57.656 11:23:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:57.656 11:23:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:57.656 11:23:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:57.656 11:23:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:57.656 11:23:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:57.656 11:23:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:57.656 11:23:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:57.656 11:23:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:57.656 11:23:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:57.656 11:23:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:57.656 11:23:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:57.656 11:23:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:57.656 11:23:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:57.656 11:23:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:57.656 11:23:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:57.656 11:23:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:57.972 /dev/nbd0 00:08:57.972 11:23:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:57.972 11:23:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:57.972 11:23:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:57.972 11:23:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:57.972 11:23:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:57.973 11:23:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:57.973 11:23:03 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:57.973 11:23:03 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:57.973 11:23:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:57.973 11:23:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:57.973 11:23:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:57.973 1+0 records in 00:08:57.973 1+0 records out 
00:08:57.973 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000439162 s, 9.3 MB/s 00:08:57.973 11:23:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:57.973 11:23:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:57.973 11:23:03 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:57.973 11:23:03 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:57.973 11:23:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:57.973 11:23:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:57.973 11:23:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:57.973 11:23:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:58.231 /dev/nbd1 00:08:58.489 11:23:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:58.489 11:23:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:58.489 11:23:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:58.489 11:23:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:58.489 11:23:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:58.489 11:23:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:58.489 11:23:03 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:58.489 11:23:04 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:58.489 11:23:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:58.489 11:23:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:58.489 11:23:04 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:58.489 1+0 records in 00:08:58.489 1+0 records out 00:08:58.489 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000324169 s, 12.6 MB/s 00:08:58.489 11:23:04 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:58.489 11:23:04 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:58.489 11:23:04 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:58.489 11:23:04 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:58.489 11:23:04 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:58.489 11:23:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:58.489 11:23:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:58.489 11:23:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:58.489 11:23:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:58.489 11:23:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:58.748 11:23:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:58.748 { 00:08:58.748 "nbd_device": "/dev/nbd0", 00:08:58.748 "bdev_name": "Malloc0" 00:08:58.748 }, 00:08:58.748 { 00:08:58.748 "nbd_device": "/dev/nbd1", 00:08:58.748 "bdev_name": "Malloc1" 00:08:58.748 } 
00:08:58.748 ]' 00:08:58.748 11:23:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:58.748 { 00:08:58.748 "nbd_device": "/dev/nbd0", 00:08:58.748 "bdev_name": "Malloc0" 00:08:58.748 }, 00:08:58.748 { 00:08:58.748 "nbd_device": "/dev/nbd1", 00:08:58.748 "bdev_name": "Malloc1" 00:08:58.748 } 00:08:58.748 ]' 00:08:58.748 11:23:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:58.748 11:23:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:58.748 /dev/nbd1' 00:08:58.748 11:23:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:58.748 11:23:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:58.748 /dev/nbd1' 00:08:58.748 11:23:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:58.748 11:23:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:58.748 11:23:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:58.748 11:23:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:58.748 11:23:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:58.748 11:23:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:58.748 11:23:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:58.748 11:23:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:58.748 11:23:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:58.748 11:23:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:58.748 11:23:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:58.748 256+0 records in 00:08:58.748 256+0 records out 00:08:58.748 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010047 s, 104 MB/s 00:08:58.748 11:23:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:58.748 11:23:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:58.748 256+0 records in 00:08:58.748 256+0 records out 00:08:58.748 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0200651 s, 52.3 MB/s 00:08:58.748 11:23:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:58.748 11:23:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:58.748 256+0 records in 00:08:58.748 256+0 records out 00:08:58.748 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0352226 s, 29.8 MB/s 00:08:58.748 11:23:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:58.748 11:23:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:58.748 11:23:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:58.748 11:23:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:58.748 11:23:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:58.748 11:23:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:58.748 11:23:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:58.748 11:23:04 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:08:58.748 11:23:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:58.748 11:23:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:58.748 11:23:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:58.748 11:23:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:58.748 11:23:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:58.748 11:23:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:58.748 11:23:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:58.748 11:23:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:58.748 11:23:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:58.748 11:23:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:58.748 11:23:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:59.315 11:23:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:59.315 11:23:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:59.315 11:23:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:59.315 11:23:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:59.315 11:23:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:59.315 11:23:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:59.315 11:23:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:59.315 11:23:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:59.315 11:23:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:59.315 11:23:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:59.573 11:23:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:59.573 11:23:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:59.573 11:23:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:59.573 11:23:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:59.573 11:23:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:59.573 11:23:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:59.573 11:23:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:59.573 11:23:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:59.573 11:23:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:59.573 11:23:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:59.573 11:23:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:59.831 11:23:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:59.831 11:23:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:59.831 11:23:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:08:59.831 11:23:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:59.831 11:23:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:59.831 11:23:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:59.831 11:23:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:59.831 11:23:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:59.831 11:23:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:59.831 11:23:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:59.831 11:23:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:59.831 11:23:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:59.831 11:23:05 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:00.398 11:23:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:01.860 [2024-11-20 11:23:07.315163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:01.860 [2024-11-20 11:23:07.445579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.860 [2024-11-20 11:23:07.445580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.118 [2024-11-20 11:23:07.664417] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:02.118 [2024-11-20 11:23:07.664533] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:03.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:03.493 11:23:09 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59643 /var/tmp/spdk-nbd.sock 00:09:03.493 11:23:09 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59643 ']' 00:09:03.493 11:23:09 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:03.493 11:23:09 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:03.493 11:23:09 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
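Note: every round is bracketed by the same start/stop protocol: waitforlisten blocks until the app's UNIX socket answers, and killprocess tears the app down. A rough sketch of that pairing, reconstructed from the trace (the real helpers in common/autotest_common.sh also retry and handle sudo-owned processes):

    # launch the app in the background and install a cleanup trap
    app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
    repeat_pid=$!
    trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock

    killprocess() {
        local pid=$1
        kill -0 "$pid"                       # bail out if it already exited
        ps --no-headers -o comm= "$pid"      # log which process is being killed
        echo "killing process with pid $pid"
        kill "$pid"                          # SIGTERM by default
        wait "$pid"                          # reap it and surface its exit code
    }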
00:09:03.493 11:23:09 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:03.493 11:23:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:03.752 11:23:09 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:03.752 11:23:09 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:03.752 11:23:09 event.app_repeat -- event/event.sh@39 -- # killprocess 59643 00:09:03.752 11:23:09 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59643 ']' 00:09:03.752 11:23:09 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59643 00:09:03.752 11:23:09 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:09:03.752 11:23:09 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:03.752 11:23:09 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59643 00:09:03.752 killing process with pid 59643 00:09:03.752 11:23:09 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:03.752 11:23:09 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:03.752 11:23:09 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59643' 00:09:03.752 11:23:09 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59643 00:09:03.752 11:23:09 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59643 00:09:05.133 spdk_app_start is called in Round 0. 00:09:05.133 Shutdown signal received, stop current app iteration 00:09:05.133 Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 reinitialization... 00:09:05.133 spdk_app_start is called in Round 1. 00:09:05.133 Shutdown signal received, stop current app iteration 00:09:05.133 Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 reinitialization... 00:09:05.133 spdk_app_start is called in Round 2. 00:09:05.133 Shutdown signal received, stop current app iteration 00:09:05.133 Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 reinitialization... 00:09:05.133 spdk_app_start is called in Round 3. 00:09:05.133 Shutdown signal received, stop current app iteration 00:09:05.133 11:23:10 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:05.133 11:23:10 event.app_repeat -- event/event.sh@42 -- # return 0 00:09:05.133 00:09:05.133 real 0m22.298s 00:09:05.133 user 0m48.631s 00:09:05.133 sys 0m3.824s 00:09:05.133 ************************************ 00:09:05.133 END TEST app_repeat 00:09:05.133 ************************************ 00:09:05.133 11:23:10 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:05.133 11:23:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:05.133 11:23:10 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:05.133 11:23:10 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:05.133 11:23:10 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:05.133 11:23:10 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:05.133 11:23:10 event -- common/autotest_common.sh@10 -- # set +x 00:09:05.133 ************************************ 00:09:05.133 START TEST cpu_locks 00:09:05.133 ************************************ 00:09:05.133 11:23:10 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:05.133 * Looking for test storage... 
00:09:05.133 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:05.133 11:23:10 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:05.133 11:23:10 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:09:05.133 11:23:10 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:05.133 11:23:10 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:05.133 11:23:10 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:05.133 11:23:10 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:05.133 11:23:10 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:05.133 11:23:10 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:09:05.133 11:23:10 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:09:05.133 11:23:10 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:09:05.133 11:23:10 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:09:05.133 11:23:10 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:09:05.133 11:23:10 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:09:05.133 11:23:10 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:09:05.133 11:23:10 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:05.133 11:23:10 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:09:05.133 11:23:10 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:09:05.133 11:23:10 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:05.133 11:23:10 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:05.133 11:23:10 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:09:05.133 11:23:10 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:09:05.133 11:23:10 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:05.133 11:23:10 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:09:05.133 11:23:10 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:09:05.133 11:23:10 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:09:05.133 11:23:10 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:09:05.133 11:23:10 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:05.133 11:23:10 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:09:05.133 11:23:10 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:09:05.133 11:23:10 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:05.133 11:23:10 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:05.133 11:23:10 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:09:05.133 11:23:10 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:05.133 11:23:10 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:05.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.133 --rc genhtml_branch_coverage=1 00:09:05.133 --rc genhtml_function_coverage=1 00:09:05.133 --rc genhtml_legend=1 00:09:05.133 --rc geninfo_all_blocks=1 00:09:05.133 --rc geninfo_unexecuted_blocks=1 00:09:05.133 00:09:05.133 ' 00:09:05.133 11:23:10 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:05.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.133 --rc genhtml_branch_coverage=1 00:09:05.133 --rc genhtml_function_coverage=1 
00:09:05.133 --rc genhtml_legend=1 00:09:05.133 --rc geninfo_all_blocks=1 00:09:05.133 --rc geninfo_unexecuted_blocks=1 00:09:05.133 00:09:05.133 ' 00:09:05.133 11:23:10 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:05.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.133 --rc genhtml_branch_coverage=1 00:09:05.133 --rc genhtml_function_coverage=1 00:09:05.133 --rc genhtml_legend=1 00:09:05.133 --rc geninfo_all_blocks=1 00:09:05.133 --rc geninfo_unexecuted_blocks=1 00:09:05.133 00:09:05.133 ' 00:09:05.133 11:23:10 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:05.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.133 --rc genhtml_branch_coverage=1 00:09:05.133 --rc genhtml_function_coverage=1 00:09:05.133 --rc genhtml_legend=1 00:09:05.133 --rc geninfo_all_blocks=1 00:09:05.133 --rc geninfo_unexecuted_blocks=1 00:09:05.133 00:09:05.133 ' 00:09:05.133 11:23:10 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:05.133 11:23:10 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:05.133 11:23:10 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:05.133 11:23:10 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:05.133 11:23:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:05.133 11:23:10 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:05.133 11:23:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:05.133 ************************************ 00:09:05.134 START TEST default_locks 00:09:05.134 ************************************ 00:09:05.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.134 11:23:10 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:09:05.134 11:23:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60123 00:09:05.134 11:23:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60123 00:09:05.134 11:23:10 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60123 ']' 00:09:05.134 11:23:10 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.134 11:23:10 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:05.134 11:23:10 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.134 11:23:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:05.134 11:23:10 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:05.134 11:23:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:05.392 [2024-11-20 11:23:10.942606] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:09:05.392 [2024-11-20 11:23:10.942785] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60123 ] 00:09:05.392 [2024-11-20 11:23:11.144370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.650 [2024-11-20 11:23:11.315566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.586 11:23:12 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:06.586 11:23:12 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:09:06.586 11:23:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60123 00:09:06.586 11:23:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60123 00:09:06.586 11:23:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:07.153 11:23:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60123 00:09:07.153 11:23:12 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 60123 ']' 00:09:07.153 11:23:12 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 60123 00:09:07.153 11:23:12 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:09:07.153 11:23:12 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:07.153 11:23:12 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60123 00:09:07.153 killing process with pid 60123 00:09:07.153 11:23:12 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:07.153 11:23:12 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:07.153 11:23:12 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60123' 00:09:07.153 11:23:12 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 60123 00:09:07.153 11:23:12 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 60123 00:09:10.441 11:23:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60123 00:09:10.441 11:23:15 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:09:10.441 11:23:15 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60123 00:09:10.441 11:23:15 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:10.441 11:23:15 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:10.441 11:23:15 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:10.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
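The cmp_versions trace at the top of this section splits the installed lcov version and the literal 2 on IFS=.-: and compares them numerically, field by field, to decide whether lcov 1.x-era coverage flags are needed. A minimal standalone sketch of that comparison, assuming the same splitting rule the trace shows (the helper name ver_lt is made up for illustration, not part of scripts/common.sh):

ver_lt() {
    # True (exit 0) when $1 sorts strictly before $2, numeric field by field.
    local -a v1 v2
    local i n
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        if (( ${v1[i]:-0} < ${v2[i]:-0} )); then return 0; fi
        if (( ${v1[i]:-0} > ${v2[i]:-0} )); then return 1; fi
    done
    return 1  # equal versions are not "less than"
}

ver_lt 1.15 2 && echo 'lcov < 2: enable the --rc lcov_*_coverage options'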
00:09:10.441 ERROR: process (pid: 60123) is no longer running 00:09:10.441 11:23:15 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:10.441 11:23:15 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 60123 00:09:10.441 11:23:15 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60123 ']' 00:09:10.441 11:23:15 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.441 11:23:15 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:10.441 11:23:15 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.441 11:23:15 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:10.441 11:23:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:10.441 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60123) - No such process 00:09:10.441 11:23:15 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:10.441 11:23:15 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:09:10.441 11:23:15 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:09:10.441 11:23:15 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:10.441 11:23:15 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:10.441 11:23:15 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:10.441 11:23:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:09:10.441 11:23:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:10.441 11:23:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:09:10.441 11:23:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:10.441 ************************************ 00:09:10.441 END TEST default_locks 00:09:10.441 ************************************ 00:09:10.441 00:09:10.441 real 0m4.676s 00:09:10.441 user 0m4.660s 00:09:10.441 sys 0m0.728s 00:09:10.441 11:23:15 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:10.441 11:23:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:10.441 11:23:15 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:09:10.441 11:23:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:10.441 11:23:15 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:10.441 11:23:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:10.441 ************************************ 00:09:10.441 START TEST default_locks_via_rpc 00:09:10.441 ************************************ 00:09:10.441 11:23:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:09:10.441 11:23:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:10.441 11:23:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60209 00:09:10.441 11:23:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # 
waitforlisten 60209 00:09:10.441 11:23:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60209 ']' 00:09:10.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.441 11:23:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.441 11:23:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:10.441 11:23:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.441 11:23:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:10.441 11:23:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.441 [2024-11-20 11:23:15.670050] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:09:10.441 [2024-11-20 11:23:15.670227] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60209 ] 00:09:10.441 [2024-11-20 11:23:15.872263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.441 [2024-11-20 11:23:16.008701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.375 11:23:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:11.375 11:23:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:11.375 11:23:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:09:11.375 11:23:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.375 11:23:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.375 11:23:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.375 11:23:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:09:11.375 11:23:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:11.375 11:23:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:09:11.375 11:23:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:11.375 11:23:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:09:11.375 11:23:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.375 11:23:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.375 11:23:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.375 11:23:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60209 00:09:11.375 11:23:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60209 00:09:11.375 11:23:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:11.942 11:23:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60209 00:09:11.942 11:23:17 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 60209 ']' 00:09:11.942 11:23:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 60209 00:09:11.942 11:23:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:09:11.942 11:23:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:11.942 11:23:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60209 00:09:11.942 killing process with pid 60209 00:09:11.942 11:23:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:11.942 11:23:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:11.942 11:23:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60209' 00:09:11.942 11:23:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 60209 00:09:11.942 11:23:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 60209 00:09:15.225 00:09:15.225 real 0m4.948s 00:09:15.225 user 0m4.911s 00:09:15.225 sys 0m0.762s 00:09:15.225 11:23:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:15.225 ************************************ 00:09:15.225 END TEST default_locks_via_rpc 00:09:15.225 ************************************ 00:09:15.225 11:23:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.225 11:23:20 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:09:15.225 11:23:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:15.225 11:23:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:15.225 11:23:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:15.225 ************************************ 00:09:15.225 START TEST non_locking_app_on_locked_coremask 00:09:15.225 ************************************ 00:09:15.225 11:23:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:09:15.225 11:23:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60290 00:09:15.225 11:23:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60290 /var/tmp/spdk.sock 00:09:15.225 11:23:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60290 ']' 00:09:15.225 11:23:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:15.225 11:23:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.225 11:23:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:15.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.225 11:23:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:15.225 11:23:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:15.225 11:23:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:15.225 [2024-11-20 11:23:20.703063] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:09:15.225 [2024-11-20 11:23:20.703566] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60290 ] 00:09:15.225 [2024-11-20 11:23:20.915103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.482 [2024-11-20 11:23:21.111459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:16.855 11:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:16.855 11:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:16.855 11:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60316 00:09:16.855 11:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:09:16.855 11:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60316 /var/tmp/spdk2.sock 00:09:16.855 11:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60316 ']' 00:09:16.855 11:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:16.855 11:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:16.855 11:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:16.855 11:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:16.855 11:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:16.855 [2024-11-20 11:23:22.390372] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:09:16.855 [2024-11-20 11:23:22.390852] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60316 ] 00:09:16.855 [2024-11-20 11:23:22.593783] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
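Every locks_exist check in this log is the same probe: list the file locks held by the target pid with lslocks and grep for the spdk_cpu_lock name prefix. A hedged reconstruction of that helper, assuming the /var/tmp/spdk_cpu_lock_NNN naming that check_remaining_locks expects later in this section:

locks_exist() {
    local pid=$1
    # lslocks prints the flock/POSIX locks a process holds; with cpumask
    # locking active, spdk_tgt holds one /var/tmp/spdk_cpu_lock_NNN per core.
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}

locks_exist 60290 && echo 'pid 60290 holds its core lock'
locks_exist 60316 || echo 'pid 60316 ran with --disable-cpumask-locks: nothing to find'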
00:09:16.855 [2024-11-20 11:23:22.593919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.421 [2024-11-20 11:23:22.919526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.945 11:23:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:19.945 11:23:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:19.945 11:23:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60290 00:09:19.945 11:23:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:19.945 11:23:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60290 00:09:20.512 11:23:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60290 00:09:20.512 11:23:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60290 ']' 00:09:20.512 11:23:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60290 00:09:20.512 11:23:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:20.512 11:23:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:20.512 11:23:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60290 00:09:20.512 killing process with pid 60290 00:09:20.512 11:23:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:20.512 11:23:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:20.512 11:23:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60290' 00:09:20.512 11:23:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60290 00:09:20.512 11:23:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60290 00:09:27.089 11:23:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60316 00:09:27.089 11:23:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60316 ']' 00:09:27.090 11:23:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60316 00:09:27.090 11:23:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:27.090 11:23:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:27.090 11:23:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60316 00:09:27.090 killing process with pid 60316 00:09:27.090 11:23:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:27.090 11:23:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:27.090 11:23:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60316' 00:09:27.090 11:23:32 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60316 00:09:27.090 11:23:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60316 00:09:29.645 00:09:29.645 real 0m14.539s 00:09:29.645 user 0m14.745s 00:09:29.645 sys 0m2.014s 00:09:29.645 11:23:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:29.645 11:23:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:29.645 ************************************ 00:09:29.645 END TEST non_locking_app_on_locked_coremask 00:09:29.645 ************************************ 00:09:29.645 11:23:35 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:09:29.645 11:23:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:29.645 11:23:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.645 11:23:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:29.645 ************************************ 00:09:29.645 START TEST locking_app_on_unlocked_coremask 00:09:29.645 ************************************ 00:09:29.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.645 11:23:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:09:29.646 11:23:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60486 00:09:29.646 11:23:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60486 /var/tmp/spdk.sock 00:09:29.646 11:23:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60486 ']' 00:09:29.646 11:23:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:09:29.646 11:23:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.646 11:23:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:29.646 11:23:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.646 11:23:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:29.646 11:23:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:29.646 [2024-11-20 11:23:35.286698] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:09:29.646 [2024-11-20 11:23:35.287190] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60486 ] 00:09:29.927 [2024-11-20 11:23:35.487843] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
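The "CPU core locks deactivated." notice above means this target skips claiming per-core lock files. When locking is active, claim_cpu_cores takes an exclusive lock on one file per core in the mask, which is exactly why a second target on an overlapping mask aborts later in this log. A minimal reproduction of that collision with flock(1), using the /var/tmp/spdk_cpu_lock_000 path (core 0) that the tests check; whether SPDK uses flock(2) or fcntl internally is not shown here, only the observable exclusive-lock behavior:

lock=/var/tmp/spdk_cpu_lock_000

# First holder: grab the lock and keep it, like spdk_tgt -m 0x1 does for core 0.
flock -x "$lock" sleep 30 &

sleep 1
# Second claimant: -n fails immediately instead of blocking, mirroring
# 'Cannot create lock on core 0, probably process ... has claimed it.'
flock -xn "$lock" true || echo 'core 0 already claimed by another process'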
00:09:29.927 [2024-11-20 11:23:35.488147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.927 [2024-11-20 11:23:35.639929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.295 11:23:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:31.295 11:23:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:31.295 11:23:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60513 00:09:31.295 11:23:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:31.295 11:23:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60513 /var/tmp/spdk2.sock 00:09:31.295 11:23:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60513 ']' 00:09:31.296 11:23:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:31.296 11:23:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:31.296 11:23:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:31.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:31.296 11:23:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:31.296 11:23:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:31.296 [2024-11-20 11:23:36.867841] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:09:31.296 [2024-11-20 11:23:36.868067] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60513 ] 00:09:31.553 [2024-11-20 11:23:37.091851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.810 [2024-11-20 11:23:37.420767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.338 11:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:34.338 11:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:34.338 11:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60513 00:09:34.338 11:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60513 00:09:34.338 11:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:35.275 11:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60486 00:09:35.275 11:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60486 ']' 00:09:35.275 11:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60486 00:09:35.275 11:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:35.275 11:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:35.275 11:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60486 00:09:35.275 11:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:35.275 11:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:35.275 killing process with pid 60486 00:09:35.275 11:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60486' 00:09:35.275 11:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60486 00:09:35.275 11:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60486 00:09:40.543 11:23:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60513 00:09:40.543 11:23:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60513 ']' 00:09:40.543 11:23:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60513 00:09:40.544 11:23:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:40.544 11:23:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:40.544 11:23:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60513 00:09:40.544 killing process with pid 60513 00:09:40.544 11:23:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:40.544 11:23:46 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:40.544 11:23:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60513' 00:09:40.544 11:23:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60513 00:09:40.544 11:23:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60513 00:09:43.841 00:09:43.841 real 0m13.914s 00:09:43.841 user 0m14.870s 00:09:43.841 sys 0m1.743s 00:09:43.841 11:23:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.841 11:23:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:43.841 ************************************ 00:09:43.841 END TEST locking_app_on_unlocked_coremask 00:09:43.841 ************************************ 00:09:43.841 11:23:49 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:09:43.841 11:23:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:43.841 11:23:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.841 11:23:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:43.841 ************************************ 00:09:43.841 START TEST locking_app_on_locked_coremask 00:09:43.841 ************************************ 00:09:43.841 11:23:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:09:43.841 11:23:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60682 00:09:43.841 11:23:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:43.841 11:23:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60682 /var/tmp/spdk.sock 00:09:43.841 11:23:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60682 ']' 00:09:43.841 11:23:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.841 11:23:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:43.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.841 11:23:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.841 11:23:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:43.841 11:23:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:43.841 [2024-11-20 11:23:49.234695] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:09:43.841 [2024-11-20 11:23:49.234842] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60682 ] 00:09:43.841 [2024-11-20 11:23:49.421339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.841 [2024-11-20 11:23:49.595642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.219 11:23:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:45.219 11:23:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:45.219 11:23:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60699 00:09:45.219 11:23:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60699 /var/tmp/spdk2.sock 00:09:45.219 11:23:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:45.219 11:23:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:09:45.219 11:23:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60699 /var/tmp/spdk2.sock 00:09:45.219 11:23:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:45.219 11:23:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:45.219 11:23:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:45.219 11:23:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:45.219 11:23:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60699 /var/tmp/spdk2.sock 00:09:45.219 11:23:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60699 ']' 00:09:45.219 11:23:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:45.219 11:23:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:45.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:45.219 11:23:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:45.219 11:23:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:45.219 11:23:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:45.219 [2024-11-20 11:23:50.829982] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:09:45.219 [2024-11-20 11:23:50.830163] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60699 ] 00:09:45.478 [2024-11-20 11:23:51.023634] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60682 has claimed it. 00:09:45.478 [2024-11-20 11:23:51.023767] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:46.046 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60699) - No such process 00:09:46.047 ERROR: process (pid: 60699) is no longer running 00:09:46.047 11:23:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:46.047 11:23:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:09:46.047 11:23:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:09:46.047 11:23:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:46.047 11:23:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:46.047 11:23:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:46.047 11:23:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60682 00:09:46.047 11:23:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60682 00:09:46.047 11:23:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:46.614 11:23:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60682 00:09:46.614 11:23:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60682 ']' 00:09:46.614 11:23:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60682 00:09:46.614 11:23:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:46.614 11:23:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:46.614 11:23:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60682 00:09:46.614 11:23:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:46.614 11:23:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:46.614 11:23:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60682' 00:09:46.614 killing process with pid 60682 00:09:46.614 11:23:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60682 00:09:46.614 11:23:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60682 00:09:49.147 00:09:49.147 real 0m5.745s 00:09:49.147 user 0m6.262s 00:09:49.147 sys 0m1.072s 00:09:49.147 11:23:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:49.147 11:23:54 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:09:49.147 ************************************ 00:09:49.147 END TEST locking_app_on_locked_coremask 00:09:49.147 ************************************ 00:09:49.147 11:23:54 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:09:49.147 11:23:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:49.147 11:23:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:49.147 11:23:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:49.147 ************************************ 00:09:49.147 START TEST locking_overlapped_coremask 00:09:49.147 ************************************ 00:09:49.147 11:23:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:09:49.147 11:23:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60774 00:09:49.147 11:23:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60774 /var/tmp/spdk.sock 00:09:49.147 11:23:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:09:49.147 11:23:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60774 ']' 00:09:49.147 11:23:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.147 11:23:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:49.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.148 11:23:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.148 11:23:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:49.148 11:23:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:49.405 [2024-11-20 11:23:55.025364] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:09:49.405 [2024-11-20 11:23:55.025576] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60774 ] 00:09:49.663 [2024-11-20 11:23:55.216565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:49.663 [2024-11-20 11:23:55.400156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:49.663 [2024-11-20 11:23:55.400237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.663 [2024-11-20 11:23:55.400240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:51.037 11:23:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:51.037 11:23:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:51.037 11:23:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60803 00:09:51.037 11:23:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60803 /var/tmp/spdk2.sock 00:09:51.037 11:23:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:09:51.037 11:23:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:09:51.037 11:23:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60803 /var/tmp/spdk2.sock 00:09:51.037 11:23:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:51.037 11:23:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:51.037 11:23:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:51.037 11:23:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:51.037 11:23:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60803 /var/tmp/spdk2.sock 00:09:51.037 11:23:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60803 ']' 00:09:51.037 11:23:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:51.037 11:23:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:51.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:51.037 11:23:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:51.037 11:23:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:51.037 11:23:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:51.037 [2024-11-20 11:23:56.703731] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:09:51.037 [2024-11-20 11:23:56.703933] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60803 ] 00:09:51.295 [2024-11-20 11:23:56.921959] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60774 has claimed it. 00:09:51.295 [2024-11-20 11:23:56.925523] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:51.862 ERROR: process (pid: 60803) is no longer running 00:09:51.862 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60803) - No such process 00:09:51.862 11:23:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:51.862 11:23:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:09:51.862 11:23:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:09:51.862 11:23:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:51.862 11:23:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:51.862 11:23:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:51.862 11:23:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:09:51.862 11:23:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:51.862 11:23:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:51.862 11:23:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:51.862 11:23:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60774 00:09:51.862 11:23:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60774 ']' 00:09:51.862 11:23:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60774 00:09:51.863 11:23:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:09:51.863 11:23:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:51.863 11:23:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60774 00:09:51.863 11:23:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:51.863 killing process with pid 60774 00:09:51.863 11:23:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:51.863 11:23:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60774' 00:09:51.863 11:23:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60774 00:09:51.863 11:23:57 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60774 00:09:55.149 00:09:55.149 real 0m5.440s 00:09:55.149 user 0m15.142s 00:09:55.149 sys 0m0.713s 00:09:55.149 11:24:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:55.149 11:24:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:55.149 ************************************ 00:09:55.149 END TEST locking_overlapped_coremask 00:09:55.149 ************************************ 00:09:55.149 11:24:00 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:09:55.149 11:24:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:55.149 11:24:00 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:55.149 11:24:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:55.149 ************************************ 00:09:55.149 START TEST locking_overlapped_coremask_via_rpc 00:09:55.149 ************************************ 00:09:55.149 11:24:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:09:55.149 11:24:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60873 00:09:55.149 11:24:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60873 /var/tmp/spdk.sock 00:09:55.149 11:24:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60873 ']' 00:09:55.149 11:24:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:09:55.149 11:24:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.149 11:24:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:55.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.149 11:24:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.149 11:24:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:55.149 11:24:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:55.149 [2024-11-20 11:24:00.509241] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:09:55.149 [2024-11-20 11:24:00.510321] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60873 ] 00:09:55.149 [2024-11-20 11:24:00.715373] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
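This test hands the first target -m 0x7 and the second -m 0x1c; the reactors below come up on cores 0-2 and 2-4 respectively, so core 2 is the only contested one once the locks are enabled over RPC. A small sketch of decoding such a hex coremask into a core list (mask_to_cores is a made-up helper, not part of the SPDK scripts):

mask_to_cores() {
    local mask=$(( $1 )) core=0 cores=()
    while (( mask )); do
        if (( mask & 1 )); then cores+=("$core"); fi
        mask=$(( mask >> 1 ))
        core=$(( core + 1 ))
    done
    echo "${cores[*]}"
}

mask_to_cores 0x7    # -> 0 1 2
mask_to_cores 0x1c   # -> 2 3 4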
00:09:55.149 [2024-11-20 11:24:00.715448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:55.149 [2024-11-20 11:24:00.861822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:55.149 [2024-11-20 11:24:00.861939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.149 [2024-11-20 11:24:00.861944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:56.520 11:24:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:56.520 11:24:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:56.520 11:24:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60896 00:09:56.520 11:24:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60896 /var/tmp/spdk2.sock 00:09:56.520 11:24:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60896 ']' 00:09:56.520 11:24:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:09:56.520 11:24:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:56.520 11:24:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:56.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:56.520 11:24:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:56.520 11:24:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:56.520 11:24:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:56.520 [2024-11-20 11:24:02.071426] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:09:56.520 [2024-11-20 11:24:02.071653] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60896 ] 00:09:56.778 [2024-11-20 11:24:02.299093] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:56.778 [2024-11-20 11:24:02.299324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:57.036 [2024-11-20 11:24:02.598529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:57.036 [2024-11-20 11:24:02.598585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:57.036 [2024-11-20 11:24:02.598593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:59.565 11:24:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:59.565 11:24:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:59.565 11:24:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:09:59.565 11:24:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.565 11:24:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:59.565 11:24:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.565 11:24:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:59.565 11:24:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:09:59.565 11:24:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:59.565 11:24:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:59.565 11:24:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:59.565 11:24:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:59.565 11:24:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:59.565 11:24:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:59.565 11:24:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.565 11:24:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:59.565 [2024-11-20 11:24:04.766752] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60873 has claimed it. 
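What the trace above shows: the locks were switched on over the first target's default socket, claiming cores 0 through 2, and the same RPC against the second target then failed on the contested core 2. That failure is the pass condition; the harness wraps the call in its NOT helper, so a nonzero exit is expected. Replayed by hand, the pair would look roughly like this:

  scripts/rpc.py framework_enable_cpumask_locks                         # first target: claims cores 0-2
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # second target: refused, core 2 taken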
00:09:59.565 request: 00:09:59.565 { 00:09:59.565 "method": "framework_enable_cpumask_locks", 00:09:59.565 "req_id": 1 00:09:59.565 } 00:09:59.565 Got JSON-RPC error response 00:09:59.565 response: 00:09:59.565 { 00:09:59.565 "code": -32603, 00:09:59.565 "message": "Failed to claim CPU core: 2" 00:09:59.565 } 00:09:59.565 11:24:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:59.565 11:24:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:09:59.565 11:24:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:59.565 11:24:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:59.565 11:24:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:59.565 11:24:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60873 /var/tmp/spdk.sock 00:09:59.565 11:24:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60873 ']' 00:09:59.565 11:24:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.565 11:24:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:59.565 11:24:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.565 11:24:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:59.565 11:24:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:59.565 11:24:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:59.565 11:24:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:59.565 11:24:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60896 /var/tmp/spdk2.sock 00:09:59.565 11:24:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60896 ']' 00:09:59.565 11:24:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:59.565 11:24:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:59.565 11:24:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:59.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
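With the claim in place, the check that follows (check_remaining_locks) asserts that exactly one lock file per claimed core exists under /var/tmp, named by zero-padded core number. Reduced to a standalone sketch:

  locks=(/var/tmp/spdk_cpu_lock_*)
  expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0-2
  [[ ${locks[*]} == "${expected[*]}" ]] && echo 'locks match'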
00:09:59.565 11:24:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:59.565 11:24:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:59.878 11:24:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:59.878 11:24:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:59.878 11:24:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:09:59.878 11:24:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:59.878 11:24:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:59.878 11:24:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:59.878 00:09:59.878 real 0m5.078s 00:09:59.878 user 0m1.874s 00:09:59.878 sys 0m0.251s 00:09:59.878 ************************************ 00:09:59.878 END TEST locking_overlapped_coremask_via_rpc 00:09:59.878 11:24:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:59.878 11:24:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:59.878 ************************************ 00:09:59.878 11:24:05 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:09:59.878 11:24:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60873 ]] 00:09:59.878 11:24:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60873 00:09:59.878 11:24:05 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60873 ']' 00:09:59.878 11:24:05 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60873 00:09:59.878 11:24:05 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:09:59.878 11:24:05 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:59.878 11:24:05 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60873 00:09:59.878 killing process with pid 60873 00:09:59.878 11:24:05 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:59.878 11:24:05 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:59.878 11:24:05 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60873' 00:09:59.878 11:24:05 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60873 00:09:59.878 11:24:05 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60873 00:10:03.158 11:24:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60896 ]] 00:10:03.158 11:24:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60896 00:10:03.158 11:24:08 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60896 ']' 00:10:03.158 11:24:08 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60896 00:10:03.158 11:24:08 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:10:03.158 11:24:08 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:03.158 
11:24:08 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60896 00:10:03.158 killing process with pid 60896 00:10:03.158 11:24:08 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:10:03.158 11:24:08 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:10:03.158 11:24:08 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60896' 00:10:03.158 11:24:08 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60896 00:10:03.158 11:24:08 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60896 00:10:05.724 11:24:11 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:05.724 11:24:11 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:10:05.724 11:24:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60873 ]] 00:10:05.724 11:24:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60873 00:10:05.724 11:24:11 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60873 ']' 00:10:05.724 11:24:11 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60873 00:10:05.724 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60873) - No such process 00:10:05.724 Process with pid 60873 is not found 00:10:05.724 11:24:11 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60873 is not found' 00:10:05.724 11:24:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60896 ]] 00:10:05.724 11:24:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60896 00:10:05.724 11:24:11 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60896 ']' 00:10:05.724 11:24:11 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60896 00:10:05.724 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60896) - No such process 00:10:05.724 Process with pid 60896 is not found 00:10:05.724 11:24:11 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60896 is not found' 00:10:05.724 11:24:11 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:05.724 00:10:05.724 real 1m0.607s 00:10:05.724 user 1m43.984s 00:10:05.724 sys 0m8.477s 00:10:05.724 11:24:11 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:05.724 11:24:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:05.724 ************************************ 00:10:05.724 END TEST cpu_locks 00:10:05.724 ************************************ 00:10:05.724 00:10:05.724 real 1m34.584s 00:10:05.724 user 2m50.557s 00:10:05.724 sys 0m13.573s 00:10:05.725 11:24:11 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:05.725 11:24:11 event -- common/autotest_common.sh@10 -- # set +x 00:10:05.725 ************************************ 00:10:05.725 END TEST event 00:10:05.725 ************************************ 00:10:05.725 11:24:11 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:05.725 11:24:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:05.725 11:24:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:05.725 11:24:11 -- common/autotest_common.sh@10 -- # set +x 00:10:05.725 ************************************ 00:10:05.725 START TEST thread 00:10:05.725 ************************************ 00:10:05.725 11:24:11 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:05.725 * Looking for test storage... 
00:10:05.725 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:10:05.725 11:24:11 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:05.725 11:24:11 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:10:05.725 11:24:11 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:05.725 11:24:11 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:05.725 11:24:11 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:05.725 11:24:11 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:05.725 11:24:11 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:05.725 11:24:11 thread -- scripts/common.sh@336 -- # IFS=.-: 00:10:05.725 11:24:11 thread -- scripts/common.sh@336 -- # read -ra ver1 00:10:05.725 11:24:11 thread -- scripts/common.sh@337 -- # IFS=.-: 00:10:05.725 11:24:11 thread -- scripts/common.sh@337 -- # read -ra ver2 00:10:05.725 11:24:11 thread -- scripts/common.sh@338 -- # local 'op=<' 00:10:05.725 11:24:11 thread -- scripts/common.sh@340 -- # ver1_l=2 00:10:05.725 11:24:11 thread -- scripts/common.sh@341 -- # ver2_l=1 00:10:05.725 11:24:11 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:05.725 11:24:11 thread -- scripts/common.sh@344 -- # case "$op" in 00:10:05.725 11:24:11 thread -- scripts/common.sh@345 -- # : 1 00:10:05.725 11:24:11 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:05.725 11:24:11 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:05.725 11:24:11 thread -- scripts/common.sh@365 -- # decimal 1 00:10:05.725 11:24:11 thread -- scripts/common.sh@353 -- # local d=1 00:10:05.725 11:24:11 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:05.725 11:24:11 thread -- scripts/common.sh@355 -- # echo 1 00:10:05.725 11:24:11 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:10:05.725 11:24:11 thread -- scripts/common.sh@366 -- # decimal 2 00:10:05.725 11:24:11 thread -- scripts/common.sh@353 -- # local d=2 00:10:05.725 11:24:11 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:05.725 11:24:11 thread -- scripts/common.sh@355 -- # echo 2 00:10:05.725 11:24:11 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:10:05.725 11:24:11 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:05.725 11:24:11 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:05.725 11:24:11 thread -- scripts/common.sh@368 -- # return 0 00:10:05.725 11:24:11 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:05.725 11:24:11 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:05.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.725 --rc genhtml_branch_coverage=1 00:10:05.725 --rc genhtml_function_coverage=1 00:10:05.725 --rc genhtml_legend=1 00:10:05.725 --rc geninfo_all_blocks=1 00:10:05.725 --rc geninfo_unexecuted_blocks=1 00:10:05.725 00:10:05.725 ' 00:10:05.725 11:24:11 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:05.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.725 --rc genhtml_branch_coverage=1 00:10:05.725 --rc genhtml_function_coverage=1 00:10:05.725 --rc genhtml_legend=1 00:10:05.725 --rc geninfo_all_blocks=1 00:10:05.725 --rc geninfo_unexecuted_blocks=1 00:10:05.725 00:10:05.725 ' 00:10:05.725 11:24:11 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:05.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:10:05.725 --rc genhtml_branch_coverage=1 00:10:05.725 --rc genhtml_function_coverage=1 00:10:05.725 --rc genhtml_legend=1 00:10:05.725 --rc geninfo_all_blocks=1 00:10:05.725 --rc geninfo_unexecuted_blocks=1 00:10:05.725 00:10:05.725 ' 00:10:05.725 11:24:11 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:05.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.725 --rc genhtml_branch_coverage=1 00:10:05.725 --rc genhtml_function_coverage=1 00:10:05.725 --rc genhtml_legend=1 00:10:05.725 --rc geninfo_all_blocks=1 00:10:05.725 --rc geninfo_unexecuted_blocks=1 00:10:05.725 00:10:05.725 ' 00:10:05.725 11:24:11 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:05.725 11:24:11 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:10:05.725 11:24:11 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:05.725 11:24:11 thread -- common/autotest_common.sh@10 -- # set +x 00:10:05.725 ************************************ 00:10:05.725 START TEST thread_poller_perf 00:10:05.725 ************************************ 00:10:05.725 11:24:11 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:05.983 [2024-11-20 11:24:11.533897] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:10:05.983 [2024-11-20 11:24:11.534062] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61108 ] 00:10:05.983 [2024-11-20 11:24:11.736145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.242 Running 1000 pollers for 1 seconds with 1 microseconds period. 
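poller_perf is run twice with identical -b 1000 -t 1 but a different -l: a 1 microsecond period first, then (further down) a period of 0, which in SPDK means the pollers run back to back on every reactor iteration rather than on a timer. Side by side, assuming the repo root as the working directory:

  test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1   # 1000 timed pollers, 1 us period
  test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1   # same pollers, busy (period 0)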
00:10:06.242 [2024-11-20 11:24:11.899150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.620 [2024-11-20T11:24:13.382Z] ====================================== 00:10:07.620 [2024-11-20T11:24:13.382Z] busy:2111204110 (cyc) 00:10:07.620 [2024-11-20T11:24:13.382Z] total_run_count: 342000 00:10:07.620 [2024-11-20T11:24:13.382Z] tsc_hz: 2100000000 (cyc) 00:10:07.620 [2024-11-20T11:24:13.382Z] ====================================== 00:10:07.620 [2024-11-20T11:24:13.382Z] poller_cost: 6173 (cyc), 2939 (nsec) 00:10:07.620 00:10:07.620 real 0m1.682s 00:10:07.620 user 0m1.453s 00:10:07.620 sys 0m0.119s 00:10:07.620 11:24:13 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:07.620 ************************************ 00:10:07.620 END TEST thread_poller_perf 00:10:07.620 ************************************ 00:10:07.620 11:24:13 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:07.620 11:24:13 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:07.620 11:24:13 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:10:07.620 11:24:13 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:07.620 11:24:13 thread -- common/autotest_common.sh@10 -- # set +x 00:10:07.620 ************************************ 00:10:07.620 START TEST thread_poller_perf 00:10:07.620 ************************************ 00:10:07.620 11:24:13 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:07.620 [2024-11-20 11:24:13.285190] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:10:07.620 [2024-11-20 11:24:13.285382] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61139 ] 00:10:07.896 [2024-11-20 11:24:13.485919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.896 [2024-11-20 11:24:13.653912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.896 Running 1000 pollers for 1 seconds with 0 microseconds period. 
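The poller_cost figures are plain ratios of the counters in each table: busy cycles over total_run_count, then a conversion to nanoseconds through tsc_hz. Checking the 1 microsecond run above by hand (the 0 microsecond run below lands at 511 cycles, about 243 ns, by the same division):

  echo $(( 2111204110 / 342000 ))              # -> 6173 cycles per poll
  echo $(( 6173 * 1000000000 / 2100000000 ))   # -> 2939 nsec at tsc_hz 2100000000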
00:10:09.272 [2024-11-20T11:24:15.034Z] ====================================== 00:10:09.272 [2024-11-20T11:24:15.034Z] busy:2104994560 (cyc) 00:10:09.272 [2024-11-20T11:24:15.034Z] total_run_count: 4115000 00:10:09.272 [2024-11-20T11:24:15.034Z] tsc_hz: 2100000000 (cyc) 00:10:09.272 [2024-11-20T11:24:15.034Z] ====================================== 00:10:09.272 [2024-11-20T11:24:15.034Z] poller_cost: 511 (cyc), 243 (nsec) 00:10:09.272 00:10:09.272 real 0m1.687s 00:10:09.272 user 0m1.450s 00:10:09.272 sys 0m0.126s 00:10:09.272 11:24:14 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:09.272 11:24:14 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:09.272 ************************************ 00:10:09.272 END TEST thread_poller_perf 00:10:09.272 ************************************ 00:10:09.272 11:24:14 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:10:09.272 00:10:09.272 real 0m3.673s 00:10:09.272 user 0m3.043s 00:10:09.272 sys 0m0.411s 00:10:09.272 11:24:14 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:09.272 11:24:14 thread -- common/autotest_common.sh@10 -- # set +x 00:10:09.272 ************************************ 00:10:09.272 END TEST thread 00:10:09.272 ************************************ 00:10:09.272 11:24:15 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:10:09.272 11:24:15 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:09.272 11:24:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:09.272 11:24:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:09.272 11:24:15 -- common/autotest_common.sh@10 -- # set +x 00:10:09.272 ************************************ 00:10:09.272 START TEST app_cmdline 00:10:09.272 ************************************ 00:10:09.272 11:24:15 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:09.531 * Looking for test storage... 
00:10:09.531 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app
00:10:09.531 11:24:15 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:10:09.531 11:24:15 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61228
00:10:09.531 11:24:15 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:10:09.531 11:24:15 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61228
00:10:09.531 11:24:15 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 61228 ']'
00:10:09.531 11:24:15 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:09.531 11:24:15 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:09.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:09.531 11:24:15 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:09.531 11:24:15 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:09.531 11:24:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:10:09.790 [2024-11-20 11:24:15.356668] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization...
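cmdline.sh starts this target with an RPC allowlist, so only the two listed methods are reachable; any other method name is rejected before dispatch, which the test exercises further down with env_dpdk_get_mem_stats and the JSON-RPC -32601 "Method not found" reply. The same flow by hand, roughly:

  build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  scripts/rpc.py spdk_get_version         # allowed, returns the version object
  scripts/rpc.py env_dpdk_get_mem_stats   # rejected with "code": -32601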
00:10:09.790 [2024-11-20 11:24:15.356855] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61228 ] 00:10:10.049 [2024-11-20 11:24:15.559112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.050 [2024-11-20 11:24:15.731568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.985 11:24:16 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:10.985 11:24:16 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:10:10.985 11:24:16 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:10:11.571 { 00:10:11.571 "version": "SPDK v25.01-pre git sha1 f86091626", 00:10:11.571 "fields": { 00:10:11.571 "major": 25, 00:10:11.571 "minor": 1, 00:10:11.571 "patch": 0, 00:10:11.571 "suffix": "-pre", 00:10:11.571 "commit": "f86091626" 00:10:11.571 } 00:10:11.571 } 00:10:11.571 11:24:17 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:10:11.571 11:24:17 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:10:11.571 11:24:17 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:10:11.571 11:24:17 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:10:11.571 11:24:17 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:10:11.571 11:24:17 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:10:11.571 11:24:17 app_cmdline -- app/cmdline.sh@26 -- # sort 00:10:11.571 11:24:17 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.571 11:24:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:11.571 11:24:17 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.571 11:24:17 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:10:11.571 11:24:17 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:10:11.571 11:24:17 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:11.571 11:24:17 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:10:11.571 11:24:17 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:11.571 11:24:17 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:11.571 11:24:17 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:11.571 11:24:17 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:11.571 11:24:17 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:11.571 11:24:17 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:11.571 11:24:17 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:11.571 11:24:17 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:11.571 11:24:17 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:11.571 11:24:17 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:11.829 request: 00:10:11.829 { 00:10:11.829 "method": "env_dpdk_get_mem_stats", 00:10:11.829 "req_id": 1 00:10:11.829 } 00:10:11.829 Got JSON-RPC error response 00:10:11.829 response: 00:10:11.829 { 00:10:11.829 "code": -32601, 00:10:11.829 "message": "Method not found" 00:10:11.829 } 00:10:11.829 11:24:17 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:10:11.829 11:24:17 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:11.830 11:24:17 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:11.830 11:24:17 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:11.830 11:24:17 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61228 00:10:11.830 11:24:17 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 61228 ']' 00:10:11.830 11:24:17 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 61228 00:10:11.830 11:24:17 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:10:11.830 11:24:17 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:11.830 11:24:17 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61228 00:10:11.830 11:24:17 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:11.830 11:24:17 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:11.830 killing process with pid 61228 00:10:11.830 11:24:17 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61228' 00:10:11.830 11:24:17 app_cmdline -- common/autotest_common.sh@973 -- # kill 61228 00:10:11.830 11:24:17 app_cmdline -- common/autotest_common.sh@978 -- # wait 61228 00:10:14.356 00:10:14.356 real 0m5.055s 00:10:14.356 user 0m5.470s 00:10:14.356 sys 0m0.701s 00:10:14.356 11:24:20 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:14.356 ************************************ 00:10:14.356 END TEST app_cmdline 00:10:14.356 ************************************ 00:10:14.356 11:24:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:14.615 11:24:20 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:10:14.615 11:24:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:14.615 11:24:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:14.615 11:24:20 -- common/autotest_common.sh@10 -- # set +x 00:10:14.615 ************************************ 00:10:14.615 START TEST version 00:10:14.615 ************************************ 00:10:14.615 11:24:20 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:10:14.615 * Looking for test storage... 
00:10:14.615 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app
00:10:14.615 11:24:20 version -- app/version.sh@17 -- # get_header_version major
00:10:14.615 11:24:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:10:14.615 11:24:20 version -- app/version.sh@14 -- # cut -f2
00:10:14.615 11:24:20 version -- app/version.sh@14 -- # tr -d '"'
00:10:14.615 11:24:20 version -- app/version.sh@17 -- # major=25
00:10:14.615 11:24:20 version -- app/version.sh@18 -- # get_header_version minor
00:10:14.615 11:24:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:10:14.615 11:24:20 version -- app/version.sh@14 -- # cut -f2
00:10:14.615 11:24:20 version -- app/version.sh@14 -- # tr -d '"'
00:10:14.615 11:24:20 version -- app/version.sh@18 -- # minor=1
00:10:14.615 11:24:20 version -- app/version.sh@19 -- # get_header_version patch
00:10:14.615 11:24:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:10:14.615 11:24:20 version -- app/version.sh@14 -- # cut -f2
00:10:14.615 11:24:20 version -- app/version.sh@14 -- # tr -d '"'
00:10:14.615 11:24:20 version -- app/version.sh@19 -- # patch=0
00:10:14.615 11:24:20 version -- app/version.sh@20 -- # get_header_version suffix
00:10:14.615 11:24:20 version -- app/version.sh@14 -- # tr -d '"'
00:10:14.615 11:24:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:10:14.615 11:24:20 version -- app/version.sh@14 -- # cut -f2
00:10:14.874 11:24:20 version -- app/version.sh@20 -- # suffix=-pre
00:10:14.874 11:24:20 version -- app/version.sh@22 -- # version=25.1
00:10:14.874 11:24:20 version -- app/version.sh@25 -- # (( patch != 0 ))
00:10:14.874 11:24:20 version -- app/version.sh@28 -- # version=25.1rc0
00:10:14.874 11:24:20 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:10:14.874 11:24:20 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)'
00:10:14.874 11:24:20 version -- app/version.sh@30 -- # py_version=25.1rc0
00:10:14.874 11:24:20 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]]
00:10:14.874
00:10:14.874 real 0m0.285s
00:10:14.874 user 0m0.170s
00:10:14.874 sys 0m0.163s
00:10:14.874 ************************************
00:10:14.874 END TEST version
00:10:14.874 ************************************
00:10:14.874 11:24:20 version -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:14.874 11:24:20 version -- common/autotest_common.sh@10 -- # set +x
00:10:14.874 11:24:20 --
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']'
00:10:14.874 11:24:20 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]]
00:10:14.874 11:24:20 -- spdk/autotest.sh@194 -- # uname -s
00:10:14.874 11:24:20 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]]
00:10:14.874 11:24:20 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:10:14.874 11:24:20 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:10:14.874 11:24:20 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']'
00:10:14.874 11:24:20 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme
00:10:14.874 11:24:20 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:14.874 11:24:20 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:14.874 11:24:20 -- common/autotest_common.sh@10 -- # set +x
00:10:14.874 ************************************
00:10:14.874 START TEST blockdev_nvme
00:10:14.874 ************************************
00:10:14.874 11:24:20 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme
00:10:14.874 * Looking for test storage...
00:10:14.874 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev
00:10:15.134 11:24:20 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:10:15.134 11:24:20 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e
00:10:15.134 11:24:20 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd
00:10:15.134 11:24:20 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:10:15.134 11:24:20 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json
00:10:15.134 11:24:20 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json
00:10:15.134 11:24:20 blockdev_nvme -- bdev/blockdev.sh@17 -- # export
RPC_PIPE_TIMEOUT=30 00:10:15.134 11:24:20 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:10:15.134 11:24:20 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:10:15.134 11:24:20 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:10:15.134 11:24:20 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:10:15.134 11:24:20 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:10:15.134 11:24:20 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:10:15.134 11:24:20 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:10:15.134 11:24:20 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:10:15.134 11:24:20 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:10:15.134 11:24:20 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:10:15.134 11:24:20 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:10:15.134 11:24:20 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:10:15.134 11:24:20 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:10:15.134 11:24:20 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:10:15.134 11:24:20 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:10:15.134 11:24:20 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:10:15.134 11:24:20 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:10:15.134 11:24:20 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61428 00:10:15.134 11:24:20 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:15.134 11:24:20 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 61428 00:10:15.134 11:24:20 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:10:15.134 11:24:20 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 61428 ']' 00:10:15.134 11:24:20 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.134 11:24:20 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:15.134 11:24:20 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.134 11:24:20 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:15.134 11:24:20 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:15.134 [2024-11-20 11:24:20.848099] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
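With the target listening, the script's next step loads the bdev configuration produced by gen_nvme.sh: one bdev_nvme_attach_controller entry per PCIe device at 0000:00:10.0 through 0000:00:13.0, submitted as a single load_subsystem_config blob. A per-controller equivalent through rpc.py would be:

  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t PCIe -a 0000:00:11.0
  # ...and likewise for Nvme2 (0000:00:12.0) and Nvme3 (0000:00:13.0)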
00:10:15.134 [2024-11-20 11:24:20.848538] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61428 ] 00:10:15.394 [2024-11-20 11:24:21.063547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.653 [2024-11-20 11:24:21.226130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.595 11:24:22 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:16.595 11:24:22 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:10:16.595 11:24:22 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:10:16.595 11:24:22 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:10:16.595 11:24:22 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:10:16.595 11:24:22 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:10:16.595 11:24:22 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:16.595 11:24:22 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:10:16.595 11:24:22 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.595 11:24:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:16.853 11:24:22 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.853 11:24:22 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:10:16.853 11:24:22 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.853 11:24:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:16.853 11:24:22 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.853 11:24:22 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:10:16.853 11:24:22 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:10:16.853 11:24:22 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.853 11:24:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:16.853 11:24:22 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.853 11:24:22 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:10:16.853 11:24:22 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.853 11:24:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:17.113 11:24:22 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.113 11:24:22 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:10:17.113 11:24:22 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.113 11:24:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:17.113 11:24:22 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.113 11:24:22 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:10:17.113 11:24:22 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:10:17.113 11:24:22 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.113 11:24:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:17.113 11:24:22 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:10:17.113 11:24:22 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.113 11:24:22 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:10:17.113 11:24:22 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:10:17.114 11:24:22 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "1146d837-b935-4ba8-80b2-3892d10c0cac"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "1146d837-b935-4ba8-80b2-3892d10c0cac",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "bd5d00b3-c463-446f-956c-916e2210fd7d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "bd5d00b3-c463-446f-956c-916e2210fd7d",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "2d0722c4-89d7-4206-a721-a45a8dd19ff3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2d0722c4-89d7-4206-a721-a45a8dd19ff3",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "bfd19ec6-0920-43d6-925c-df781e115286"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "bfd19ec6-0920-43d6-925c-df781e115286",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "bfebb8cf-18c7-4858-bb65-c098c0b8cde4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "bfebb8cf-18c7-4858-bb65-c098c0b8cde4",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "5f31d1c1-a9e9-4afa-abf2-342b0a5fffea"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "5f31d1c1-a9e9-4afa-abf2-342b0a5fffea",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:10:17.114 11:24:22 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:10:17.114 11:24:22 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:10:17.114 11:24:22 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:10:17.114 11:24:22 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 61428 00:10:17.114 11:24:22 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 61428 ']' 00:10:17.114 11:24:22 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 61428 00:10:17.114 11:24:22 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:10:17.114 11:24:22 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:17.114 11:24:22 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61428 00:10:17.114 11:24:22 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:17.114 killing process with pid 61428 00:10:17.114 11:24:22 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:17.114 11:24:22 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61428' 00:10:17.114 11:24:22 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 61428 00:10:17.114 11:24:22 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 61428 00:10:20.399 11:24:25 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:20.399 11:24:25 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:10:20.399 11:24:25 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:10:20.399 11:24:25 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:20.399 11:24:25 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:20.399 ************************************ 00:10:20.399 START TEST bdev_hello_world 00:10:20.399 ************************************ 00:10:20.399 11:24:25 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:10:20.399 [2024-11-20 11:24:25.865586] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:10:20.399 [2024-11-20 11:24:25.865781] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61528 ] 00:10:20.399 [2024-11-20 11:24:26.065204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.659 [2024-11-20 11:24:26.192112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.227 [2024-11-20 11:24:26.922916] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:10:21.227 [2024-11-20 11:24:26.922974] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:10:21.227 [2024-11-20 11:24:26.922999] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:10:21.227 [2024-11-20 11:24:26.926395] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:10:21.227 [2024-11-20 11:24:26.927114] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:10:21.227 [2024-11-20 11:24:26.927162] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:10:21.227 [2024-11-20 11:24:26.927368] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
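The hello_world test above drives SPDK's hello_bdev example end to end: open Nvme0n1 from the generated JSON config, write a buffer, read it back, and stop the app. A minimal sketch of reproducing that run by hand, assuming the same /home/vagrant/spdk_repo checkout and QEMU-emulated NVMe devices used in this job; /tmp/bdev.json is an arbitrary scratch path (the harness uses test/bdev/bdev.json), and gen_nvme.sh's --json-with-subsystems flag is assumed available in this tree (the bare invocation seen earlier emits only the bdev subsystem fragment):

# Generate an NVMe bdev config; each entry is equivalent to
# scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems > /tmp/bdev.json

# Run the example against the first namespace, as blockdev.sh does above
/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /tmp/bdev.json -b Nvme0n1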
00:10:21.227 00:10:21.227 [2024-11-20 11:24:26.927402] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:10:22.613 00:10:22.613 real 0m2.459s 00:10:22.613 user 0m2.051s 00:10:22.613 sys 0m0.296s 00:10:22.613 11:24:28 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:22.613 ************************************ 00:10:22.613 END TEST bdev_hello_world 00:10:22.613 ************************************ 00:10:22.613 11:24:28 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:10:22.613 11:24:28 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:10:22.613 11:24:28 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:22.613 11:24:28 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:22.613 11:24:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:22.613 ************************************ 00:10:22.613 START TEST bdev_bounds 00:10:22.613 ************************************ 00:10:22.613 11:24:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:10:22.613 Process bdevio pid: 61576 00:10:22.613 11:24:28 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61576 00:10:22.613 11:24:28 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:10:22.613 11:24:28 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61576' 00:10:22.613 11:24:28 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61576 00:10:22.613 11:24:28 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:22.613 11:24:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61576 ']' 00:10:22.613 11:24:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.613 11:24:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:22.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:22.613 11:24:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:22.613 11:24:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:22.613 11:24:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:10:22.873 [2024-11-20 11:24:28.378754] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
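The bdev_bounds test starting here launches bdevio with -w, so the app finishes setup and then waits; the CUnit suites below are kicked off over RPC from a second process. A sketch of that two-step pattern, under the same assumptions as above:

# Terminal 1: start bdevio in wait mode, with the same -w -s 0 flags as this run
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /tmp/bdev.json

# Terminal 2: signal the waiting bdevio instance to run its test suites
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests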
00:10:22.873 [2024-11-20 11:24:28.379150] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61576 ] 00:10:22.873 [2024-11-20 11:24:28.575972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:23.132 [2024-11-20 11:24:28.714677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:23.132 [2024-11-20 11:24:28.714762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.132 [2024-11-20 11:24:28.714787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:24.065 11:24:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:24.065 11:24:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:10:24.065 11:24:29 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:10:24.065 I/O targets: 00:10:24.065 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:10:24.065 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:10:24.065 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:24.065 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:24.065 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:24.065 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:10:24.065 00:10:24.065 00:10:24.065 CUnit - A unit testing framework for C - Version 2.1-3 00:10:24.065 http://cunit.sourceforge.net/ 00:10:24.065 00:10:24.065 00:10:24.065 Suite: bdevio tests on: Nvme3n1 00:10:24.065 Test: blockdev write read block ...passed 00:10:24.065 Test: blockdev write zeroes read block ...passed 00:10:24.065 Test: blockdev write zeroes read no split ...passed 00:10:24.065 Test: blockdev write zeroes read split ...passed 00:10:24.065 Test: blockdev write zeroes read split partial ...passed 00:10:24.065 Test: blockdev reset ...[2024-11-20 11:24:29.674991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:10:24.065 [2024-11-20 11:24:29.679639] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
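Each suite's reset test asks bdev_nvme to disconnect and reconnect the controller, which is what the nvme_ctrlr_disconnect / bdev_nvme_reset_ctrlr_complete notice pair above records. The same reset can also be issued administratively; a hedged sketch, assuming this SPDK revision exposes the bdev_nvme_reset_controller RPC with a bare controller-name argument:

# Assumed RPC name/signature: manually reset the controller behind Nvme3n1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_reset_controller Nvme3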
00:10:24.065 passed 00:10:24.065 Test: blockdev write read 8 blocks ...passed 00:10:24.065 Test: blockdev write read size > 128k ...passed 00:10:24.065 Test: blockdev write read invalid size ...passed 00:10:24.065 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:24.065 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:24.065 Test: blockdev write read max offset ...passed 00:10:24.065 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:24.065 Test: blockdev writev readv 8 blocks ...passed 00:10:24.065 Test: blockdev writev readv 30 x 1block ...passed 00:10:24.065 Test: blockdev writev readv block ...passed 00:10:24.065 Test: blockdev writev readv size > 128k ...passed 00:10:24.065 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:24.065 Test: blockdev comparev and writev ...[2024-11-20 11:24:29.689749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b780a000 len:0x1000 00:10:24.065 passed 00:10:24.065 Test: blockdev nvme passthru rw ...[2024-11-20 11:24:29.690014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:24.065 passed 00:10:24.065 Test: blockdev nvme passthru vendor specific ...[2024-11-20 11:24:29.690927] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:24.065 passed 00:10:24.065 Test: blockdev nvme admin passthru ...[2024-11-20 11:24:29.691147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:24.065 passed 00:10:24.065 Test: blockdev copy ...passed 00:10:24.065 Suite: bdevio tests on: Nvme2n3 00:10:24.065 Test: blockdev write read block ...passed 00:10:24.065 Test: blockdev write zeroes read block ...passed 00:10:24.065 Test: blockdev write zeroes read no split ...passed 00:10:24.065 Test: blockdev write zeroes read split ...passed 00:10:24.065 Test: blockdev write zeroes read split partial ...passed 00:10:24.065 Test: blockdev reset ...[2024-11-20 11:24:29.771698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:24.065 [2024-11-20 11:24:29.776345] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed 
00:10:24.065 00:10:24.065 Test: blockdev write read 8 blocks ...passed 00:10:24.065 Test: blockdev write read size > 128k ...passed 00:10:24.065 Test: blockdev write read invalid size ...passed 00:10:24.065 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:24.065 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:24.065 Test: blockdev write read max offset ...passed 00:10:24.065 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:24.065 Test: blockdev writev readv 8 blocks ...passed 00:10:24.065 Test: blockdev writev readv 30 x 1block ...passed 00:10:24.065 Test: blockdev writev readv block ...passed 00:10:24.065 Test: blockdev writev readv size > 128k ...passed 00:10:24.065 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:24.065 Test: blockdev comparev and writev ...[2024-11-20 11:24:29.785318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29b206000 len:0x1000 00:10:24.065 passed 00:10:24.065 Test: blockdev nvme passthru rw ...[2024-11-20 11:24:29.785587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:24.065 passed 00:10:24.065 Test: blockdev nvme passthru vendor specific ...[2024-11-20 11:24:29.786299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:24.065 passed 00:10:24.065 Test: blockdev nvme admin passthru ...[2024-11-20 11:24:29.786510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:24.065 passed 00:10:24.065 Test: blockdev copy ...passed 00:10:24.065 Suite: bdevio tests on: Nvme2n2 00:10:24.065 Test: blockdev write read block ...passed 00:10:24.065 Test: blockdev write zeroes read block ...passed 00:10:24.065 Test: blockdev write zeroes read no split ...passed 00:10:24.324 Test: blockdev write zeroes read split ...passed 00:10:24.324 Test: blockdev write zeroes read split partial ...passed 00:10:24.324 Test: blockdev reset ...[2024-11-20 11:24:29.868476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:24.324 [2024-11-20 11:24:29.873326] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
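The long per-bdev JSON dump earlier in this log (Nvme0n1 through Nvme3n1) was collected by piping rpc_cmd bdev_get_bdevs through jq, first selecting unclaimed bdevs and then extracting .name into bdevs_name. A minimal standalone version of that query against a running target on the default RPC socket:

# List the names of all bdevs that no module has claimed
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs | jq -r '.[] | select(.claimed == false) | .name'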
00:10:24.324 passed 00:10:24.324 Test: blockdev write read 8 blocks ...passed 00:10:24.324 Test: blockdev write read size > 128k ...passed 00:10:24.324 Test: blockdev write read invalid size ...passed 00:10:24.324 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:24.324 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:24.324 Test: blockdev write read max offset ...passed 00:10:24.324 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:24.324 Test: blockdev writev readv 8 blocks ...passed 00:10:24.324 Test: blockdev writev readv 30 x 1block ...passed 00:10:24.324 Test: blockdev writev readv block ...passed 00:10:24.324 Test: blockdev writev readv size > 128k ...passed 00:10:24.324 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:24.324 Test: blockdev comparev and writev ...[2024-11-20 11:24:29.881759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d303c000 len:0x1000 00:10:24.324 passed 00:10:24.324 Test: blockdev nvme passthru rw ...[2024-11-20 11:24:29.882059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:24.324 passed 00:10:24.324 Test: blockdev nvme passthru vendor specific ...[2024-11-20 11:24:29.882761] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:24.324 passed 00:10:24.324 Test: blockdev nvme admin passthru ...[2024-11-20 11:24:29.882996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:24.324 passed 00:10:24.324 Test: blockdev copy ...passed 00:10:24.324 Suite: bdevio tests on: Nvme2n1 00:10:24.324 Test: blockdev write read block ...passed 00:10:24.324 Test: blockdev write zeroes read block ...passed 00:10:24.324 Test: blockdev write zeroes read no split ...passed 00:10:24.324 Test: blockdev write zeroes read split ...passed 00:10:24.324 Test: blockdev write zeroes read split partial ...passed 00:10:24.324 Test: blockdev reset ...[2024-11-20 11:24:29.960102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:24.324 [2024-11-20 11:24:29.964893] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed 
00:10:24.324 00:10:24.324 Test: blockdev write read 8 blocks ...passed 00:10:24.324 Test: blockdev write read size > 128k ...passed 00:10:24.324 Test: blockdev write read invalid size ...passed 00:10:24.324 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:24.324 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:24.324 Test: blockdev write read max offset ...passed 00:10:24.324 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:24.324 Test: blockdev writev readv 8 blocks ...passed 00:10:24.324 Test: blockdev writev readv 30 x 1block ...passed 00:10:24.324 Test: blockdev writev readv block ...passed 00:10:24.324 Test: blockdev writev readv size > 128k ...passed 00:10:24.324 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:24.324 Test: blockdev comparev and writev ...[2024-11-20 11:24:29.973170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d3038000 len:0x1000 00:10:24.324 passed 00:10:24.324 Test: blockdev nvme passthru rw ...[2024-11-20 11:24:29.973429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:24.324 passed 00:10:24.324 Test: blockdev nvme passthru vendor specific ...[2024-11-20 11:24:29.974218] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:24.324 passed 00:10:24.324 Test: blockdev nvme admin passthru ...[2024-11-20 11:24:29.974445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:24.324 passed 00:10:24.324 Test: blockdev copy ...passed 00:10:24.324 Suite: bdevio tests on: Nvme1n1 00:10:24.324 Test: blockdev write read block ...passed 00:10:24.324 Test: blockdev write zeroes read block ...passed 00:10:24.324 Test: blockdev write zeroes read no split ...passed 00:10:24.324 Test: blockdev write zeroes read split ...passed 00:10:24.324 Test: blockdev write zeroes read split partial ...passed 00:10:24.324 Test: blockdev reset ...[2024-11-20 11:24:30.057896] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:10:24.324 passed 00:10:24.324 Test: blockdev write read 8 blocks ...[2024-11-20 11:24:30.062158] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:10:24.324 passed 00:10:24.324 Test: blockdev write read size > 128k ...passed 00:10:24.324 Test: blockdev write read invalid size ...passed 00:10:24.324 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:24.324 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:24.324 Test: blockdev write read max offset ...passed 00:10:24.324 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:24.324 Test: blockdev writev readv 8 blocks ...passed 00:10:24.324 Test: blockdev writev readv 30 x 1block ...passed 00:10:24.324 Test: blockdev writev readv block ...passed 00:10:24.324 Test: blockdev writev readv size > 128k ...passed 00:10:24.324 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:24.324 Test: blockdev comparev and writev ...[2024-11-20 11:24:30.070692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d3034000 len:0x1000 00:10:24.324 passed 00:10:24.324 Test: blockdev nvme passthru rw ...[2024-11-20 11:24:30.070941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:24.324 passed 00:10:24.324 Test: blockdev nvme passthru vendor specific ...[2024-11-20 11:24:30.071784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:24.324 passed 00:10:24.324 Test: blockdev nvme admin passthru ...[2024-11-20 11:24:30.072002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:24.324 passed 00:10:24.324 Test: blockdev copy ...passed 00:10:24.324 Suite: bdevio tests on: Nvme0n1 00:10:24.325 Test: blockdev write read block ...passed 00:10:24.325 Test: blockdev write zeroes read block ...passed 00:10:24.584 Test: blockdev write zeroes read no split ...passed 00:10:24.584 Test: blockdev write zeroes read split ...passed 00:10:24.584 Test: blockdev write zeroes read split partial ...passed 00:10:24.584 Test: blockdev reset ...[2024-11-20 11:24:30.154711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:10:24.584 [2024-11-20 11:24:30.159214] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:10:24.584 passed 00:10:24.584 Test: blockdev write read 8 blocks ...passed 00:10:24.584 Test: blockdev write read size > 128k ...passed 00:10:24.584 Test: blockdev write read invalid size ...passed 00:10:24.584 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:24.584 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:24.584 Test: blockdev write read max offset ...passed 00:10:24.584 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:24.584 Test: blockdev writev readv 8 blocks ...passed 00:10:24.584 Test: blockdev writev readv 30 x 1block ...passed 00:10:24.584 Test: blockdev writev readv block ...passed 00:10:24.584 Test: blockdev writev readv size > 128k ...passed 00:10:24.584 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:24.584 Test: blockdev comparev and writev ...[2024-11-20 11:24:30.169255] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:10:24.584 separate metadata which is not supported yet. 
00:10:24.584 passed 00:10:24.584 Test: blockdev nvme passthru rw ...passed 00:10:24.584 Test: blockdev nvme passthru vendor specific ...[2024-11-20 11:24:30.170283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:10:24.584 [2024-11-20 11:24:30.170608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:10:24.584 passed 00:10:24.584 Test: blockdev nvme admin passthru ...passed 00:10:24.584 Test: blockdev copy ...passed 00:10:24.584 00:10:24.584 Run Summary: Type Total Ran Passed Failed Inactive 00:10:24.584 suites 6 6 n/a 0 0 00:10:24.584 tests 138 138 138 0 0 00:10:24.584 asserts 893 893 893 0 n/a 00:10:24.584 00:10:24.584 Elapsed time = 1.571 seconds 00:10:24.584 0 00:10:24.584 11:24:30 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61576 00:10:24.584 11:24:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61576 ']' 00:10:24.584 11:24:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61576 00:10:24.584 11:24:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:10:24.584 11:24:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:24.584 11:24:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61576 00:10:24.584 killing process with pid 61576 00:10:24.584 11:24:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:24.584 11:24:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:24.584 11:24:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61576' 00:10:24.584 11:24:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61576 00:10:24.584 11:24:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61576 00:10:25.959 ************************************ 00:10:25.959 END TEST bdev_bounds 00:10:25.959 ************************************ 00:10:25.959 11:24:31 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:10:25.959 00:10:25.959 real 0m3.114s 00:10:25.959 user 0m7.902s 00:10:25.959 sys 0m0.449s 00:10:25.959 11:24:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:25.959 11:24:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:10:25.959 11:24:31 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:25.959 11:24:31 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:25.959 11:24:31 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:25.959 11:24:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:25.959 ************************************ 00:10:25.959 START TEST bdev_nbd 00:10:25.959 ************************************ 00:10:25.959 11:24:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:25.959 11:24:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:10:25.959 11:24:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:10:25.959 11:24:31 
blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:25.959 11:24:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:25.959 11:24:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:25.959 11:24:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:10:25.959 11:24:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:10:25.959 11:24:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:10:25.959 11:24:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:10:25.959 11:24:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:10:25.959 11:24:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:10:25.959 11:24:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:25.959 11:24:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:10:25.959 11:24:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:25.959 11:24:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:10:25.959 11:24:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61641 00:10:25.959 11:24:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:10:25.959 11:24:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:25.959 11:24:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61641 /var/tmp/spdk-nbd.sock 00:10:25.959 11:24:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61641 ']' 00:10:25.959 11:24:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:25.959 11:24:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:25.959 11:24:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:25.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:25.959 11:24:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:25.959 11:24:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:10:25.959 [2024-11-20 11:24:31.565561] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
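The bdev_nbd test starting here maps each bdev to a kernel /dev/nbdX node through the bare bdev_svc app and verifies every mapping with a single direct-I/O dd read, exactly as the waitfornbd traces below show. A condensed sketch of one round trip, using the same socket and device names as this run (config path as assumed above):

# Start a minimal bdev service on a dedicated RPC socket
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /tmp/bdev.json &

# Export Nvme0n1 as /dev/nbd0 and read one 4096-byte block through the kernel
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct

# Tear the mapping down again
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0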
00:10:25.959 [2024-11-20 11:24:31.566646] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:26.218 [2024-11-20 11:24:31.760272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.218 [2024-11-20 11:24:31.885600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.207 11:24:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:27.207 11:24:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:10:27.207 11:24:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:27.207 11:24:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:27.207 11:24:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:27.207 11:24:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:10:27.207 11:24:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:27.207 11:24:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:27.207 11:24:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:27.207 11:24:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:10:27.207 11:24:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:10:27.207 11:24:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:10:27.207 11:24:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:10:27.207 11:24:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:27.207 11:24:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:10:27.207 11:24:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:10:27.207 11:24:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:10:27.473 11:24:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:10:27.473 11:24:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:27.473 11:24:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:27.473 11:24:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:27.473 11:24:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:27.473 11:24:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:27.473 11:24:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:27.473 11:24:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:27.473 11:24:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:27.473 11:24:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:27.473 1+0 records in 
00:10:27.473 1+0 records out 00:10:27.473 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000585437 s, 7.0 MB/s 00:10:27.473 11:24:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:27.473 11:24:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:27.473 11:24:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:27.473 11:24:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:27.473 11:24:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:27.473 11:24:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:27.473 11:24:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:27.473 11:24:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:10:27.731 11:24:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:10:27.731 11:24:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:10:27.731 11:24:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:10:27.731 11:24:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:27.731 11:24:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:27.731 11:24:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:27.731 11:24:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:27.731 11:24:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:27.731 11:24:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:27.731 11:24:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:27.731 11:24:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:27.731 11:24:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:27.731 1+0 records in 00:10:27.731 1+0 records out 00:10:27.731 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000652909 s, 6.3 MB/s 00:10:27.731 11:24:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:27.731 11:24:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:27.731 11:24:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:27.731 11:24:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:27.731 11:24:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:27.731 11:24:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:27.731 11:24:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:27.731 11:24:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:10:27.989 11:24:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:10:27.989 11:24:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:10:27.989 11:24:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:10:27.989 11:24:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:10:27.989 11:24:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:27.989 11:24:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:27.989 11:24:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:27.989 11:24:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:10:27.989 11:24:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:27.989 11:24:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:27.989 11:24:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:27.989 11:24:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:27.989 1+0 records in 00:10:27.989 1+0 records out 00:10:27.989 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000664699 s, 6.2 MB/s 00:10:27.989 11:24:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:27.989 11:24:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:27.989 11:24:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:27.989 11:24:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:27.989 11:24:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:27.989 11:24:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:27.989 11:24:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:27.989 11:24:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:10:28.248 11:24:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:10:28.248 11:24:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:10:28.248 11:24:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:10:28.248 11:24:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:10:28.248 11:24:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:28.248 11:24:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:28.248 11:24:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:28.248 11:24:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:10:28.248 11:24:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:28.248 11:24:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:28.248 11:24:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:28.248 11:24:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:28.248 1+0 records in 00:10:28.248 1+0 records out 00:10:28.248 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00065037 s, 6.3 MB/s 00:10:28.248 11:24:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:28.248 11:24:33 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:28.248 11:24:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:28.248 11:24:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:28.248 11:24:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:28.248 11:24:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:28.248 11:24:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:28.248 11:24:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:10:28.506 11:24:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:10:28.506 11:24:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:10:28.506 11:24:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:10:28.507 11:24:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:10:28.507 11:24:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:28.507 11:24:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:28.507 11:24:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:28.507 11:24:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:10:28.507 11:24:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:28.507 11:24:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:28.507 11:24:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:28.507 11:24:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:28.507 1+0 records in 00:10:28.507 1+0 records out 00:10:28.507 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000703091 s, 5.8 MB/s 00:10:28.507 11:24:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:28.507 11:24:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:28.507 11:24:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:28.507 11:24:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:28.507 11:24:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:28.507 11:24:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:28.507 11:24:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:28.507 11:24:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:10:28.767 11:24:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:10:28.767 11:24:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:10:28.767 11:24:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:10:28.767 11:24:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:10:28.767 11:24:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:28.767 11:24:34 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:28.767 11:24:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:28.767 11:24:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:10:28.767 11:24:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:28.767 11:24:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:28.767 11:24:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:28.767 11:24:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:28.767 1+0 records in 00:10:28.767 1+0 records out 00:10:28.767 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000668301 s, 6.1 MB/s 00:10:28.767 11:24:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:28.767 11:24:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:28.767 11:24:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:28.767 11:24:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:28.767 11:24:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:28.767 11:24:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:28.767 11:24:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:28.767 11:24:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:29.334 11:24:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:10:29.334 { 00:10:29.334 "nbd_device": "/dev/nbd0", 00:10:29.334 "bdev_name": "Nvme0n1" 00:10:29.334 }, 00:10:29.334 { 00:10:29.334 "nbd_device": "/dev/nbd1", 00:10:29.334 "bdev_name": "Nvme1n1" 00:10:29.334 }, 00:10:29.334 { 00:10:29.334 "nbd_device": "/dev/nbd2", 00:10:29.334 "bdev_name": "Nvme2n1" 00:10:29.334 }, 00:10:29.334 { 00:10:29.334 "nbd_device": "/dev/nbd3", 00:10:29.334 "bdev_name": "Nvme2n2" 00:10:29.334 }, 00:10:29.334 { 00:10:29.334 "nbd_device": "/dev/nbd4", 00:10:29.334 "bdev_name": "Nvme2n3" 00:10:29.334 }, 00:10:29.334 { 00:10:29.334 "nbd_device": "/dev/nbd5", 00:10:29.334 "bdev_name": "Nvme3n1" 00:10:29.334 } 00:10:29.334 ]' 00:10:29.334 11:24:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:10:29.334 11:24:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:10:29.334 11:24:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:10:29.334 { 00:10:29.334 "nbd_device": "/dev/nbd0", 00:10:29.334 "bdev_name": "Nvme0n1" 00:10:29.334 }, 00:10:29.334 { 00:10:29.334 "nbd_device": "/dev/nbd1", 00:10:29.334 "bdev_name": "Nvme1n1" 00:10:29.334 }, 00:10:29.334 { 00:10:29.334 "nbd_device": "/dev/nbd2", 00:10:29.334 "bdev_name": "Nvme2n1" 00:10:29.334 }, 00:10:29.334 { 00:10:29.334 "nbd_device": "/dev/nbd3", 00:10:29.334 "bdev_name": "Nvme2n2" 00:10:29.334 }, 00:10:29.334 { 00:10:29.334 "nbd_device": "/dev/nbd4", 00:10:29.334 "bdev_name": "Nvme2n3" 00:10:29.334 }, 00:10:29.334 { 00:10:29.334 "nbd_device": "/dev/nbd5", 00:10:29.334 "bdev_name": "Nvme3n1" 00:10:29.334 } 00:10:29.334 ]' 00:10:29.334 11:24:34 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:10:29.334 11:24:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:29.334 11:24:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:10:29.334 11:24:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:29.334 11:24:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:29.334 11:24:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:29.334 11:24:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:29.334 11:24:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:29.334 11:24:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:29.334 11:24:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:29.334 11:24:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:29.334 11:24:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:29.334 11:24:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:29.334 11:24:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:29.334 11:24:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:29.334 11:24:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:29.334 11:24:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:29.594 11:24:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:29.594 11:24:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:29.594 11:24:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:29.594 11:24:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:29.594 11:24:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:29.594 11:24:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:29.594 11:24:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:29.594 11:24:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:29.594 11:24:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:29.594 11:24:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:10:30.161 11:24:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:10:30.161 11:24:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:10:30.161 11:24:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:10:30.161 11:24:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:30.161 11:24:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:30.161 11:24:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:10:30.161 11:24:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:30.161 11:24:35 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:10:30.161 11:24:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:30.161 11:24:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:10:30.161 11:24:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:10:30.161 11:24:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:10:30.161 11:24:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:10:30.161 11:24:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:30.161 11:24:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:30.161 11:24:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:10:30.161 11:24:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:30.161 11:24:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:30.161 11:24:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:30.161 11:24:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:10:30.419 11:24:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:10:30.420 11:24:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:10:30.420 11:24:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:10:30.420 11:24:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:30.420 11:24:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:30.420 11:24:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:10:30.420 11:24:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:30.420 11:24:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:30.420 11:24:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:30.420 11:24:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:10:30.679 11:24:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:10:30.679 11:24:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:10:30.679 11:24:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:10:30.679 11:24:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:30.679 11:24:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:30.679 11:24:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:10:30.679 11:24:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:30.679 11:24:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:30.679 11:24:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:30.679 11:24:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:30.679 11:24:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:30.936 11:24:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:30.936 11:24:36 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:30.936 11:24:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:31.194 11:24:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:31.195 11:24:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:10:31.195 11:24:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:31.195 11:24:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:10:31.195 11:24:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:10:31.195 11:24:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:10:31.195 11:24:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:10:31.195 11:24:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:10:31.195 11:24:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:10:31.195 11:24:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:10:31.195 11:24:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:31.195 11:24:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:31.195 11:24:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:31.195 11:24:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:31.195 11:24:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:31.195 11:24:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:10:31.195 11:24:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:31.195 11:24:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:31.195 11:24:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:31.195 11:24:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:31.195 11:24:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:31.195 11:24:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:10:31.195 11:24:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:31.195 11:24:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:31.195 11:24:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:10:31.452 /dev/nbd0 00:10:31.452 11:24:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:31.452 11:24:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:31.452 11:24:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:31.452 11:24:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:31.452 11:24:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:31.452 
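The nbd_rpc_data_verify pass now underway pairs each bdev with an nbd node one index at a time. The essence of the loop, with the arrays and socket path exactly as declared in the trace (rpc.py abbreviated from its full repository path for readability):

    bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
    nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
    for ((i = 0; i < 6; i++)); do
        # Expose bdev i through the kernel nbd driver as its paired /dev node.
        rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
    done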
11:24:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:31.452 11:24:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:31.452 11:24:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:31.452 11:24:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:31.452 11:24:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:31.452 11:24:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:31.452 1+0 records in 00:10:31.452 1+0 records out 00:10:31.452 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000479203 s, 8.5 MB/s 00:10:31.452 11:24:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:31.452 11:24:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:31.452 11:24:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:31.452 11:24:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:31.452 11:24:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:31.452 11:24:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:31.452 11:24:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:31.452 11:24:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:10:31.711 /dev/nbd1 00:10:31.711 11:24:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:31.711 11:24:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:31.711 11:24:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:31.711 11:24:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:31.711 11:24:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:31.711 11:24:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:31.711 11:24:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:31.711 11:24:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:31.711 11:24:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:31.711 11:24:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:31.711 11:24:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:31.711 1+0 records in 00:10:31.711 1+0 records out 00:10:31.711 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00054252 s, 7.5 MB/s 00:10:31.711 11:24:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:31.711 11:24:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:31.711 11:24:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:31.711 11:24:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:31.711 11:24:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 
-- # return 0 00:10:31.711 11:24:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:31.711 11:24:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:31.711 11:24:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:10:31.970 /dev/nbd10 00:10:31.970 11:24:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:10:31.970 11:24:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:10:31.970 11:24:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:10:31.970 11:24:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:31.970 11:24:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:31.970 11:24:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:31.970 11:24:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:10:31.970 11:24:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:31.970 11:24:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:31.970 11:24:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:31.970 11:24:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:31.970 1+0 records in 00:10:31.970 1+0 records out 00:10:31.970 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000520006 s, 7.9 MB/s 00:10:31.970 11:24:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:31.970 11:24:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:31.970 11:24:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:31.970 11:24:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:31.970 11:24:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:31.970 11:24:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:31.970 11:24:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:31.970 11:24:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:10:32.229 /dev/nbd11 00:10:32.229 11:24:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:10:32.229 11:24:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:10:32.229 11:24:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:10:32.229 11:24:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:32.229 11:24:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:32.229 11:24:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:32.229 11:24:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:10:32.229 11:24:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:32.229 11:24:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:32.229 11:24:37 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:32.229 11:24:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:32.229 1+0 records in 00:10:32.229 1+0 records out 00:10:32.229 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000715742 s, 5.7 MB/s 00:10:32.229 11:24:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:32.229 11:24:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:32.229 11:24:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:32.229 11:24:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:32.229 11:24:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:32.229 11:24:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:32.229 11:24:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:32.229 11:24:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:10:32.827 /dev/nbd12 00:10:32.827 11:24:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:10:32.827 11:24:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:10:32.827 11:24:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:10:32.827 11:24:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:32.827 11:24:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:32.827 11:24:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:32.827 11:24:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:10:32.827 11:24:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:32.827 11:24:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:32.827 11:24:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:32.827 11:24:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:32.827 1+0 records in 00:10:32.827 1+0 records out 00:10:32.827 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000615335 s, 6.7 MB/s 00:10:32.827 11:24:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:32.828 11:24:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:32.828 11:24:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:32.828 11:24:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:32.828 11:24:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:32.828 11:24:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:32.828 11:24:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:32.828 11:24:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:10:32.828 /dev/nbd13 00:10:33.086 11:24:38 
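Each nbd_start_disk is chased by waitfornbd, traced above for nbd0 through nbd12 and below for nbd13: it first waits for the name to show up in /proc/partitions, then proves the node answers I/O with a single 4 KiB O_DIRECT read. Roughly (file path and check order from the trace; the loop pacing is an assumption):

    nbd=nbd13
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd" /proc/partitions && break   # device is visible to the kernel
        sleep 0.1                                     # assumed back-off
    done
    # An actual read distinguishes "node exists" from "node is usable".
    dd if=/dev/$nbd of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
    size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)   # bytes the read returned
    rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
    [ "$size" != 0 ]                                  # non-empty read means the device is live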
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:10:33.086 11:24:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:10:33.086 11:24:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:10:33.086 11:24:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:33.086 11:24:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:33.086 11:24:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:33.086 11:24:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:10:33.086 11:24:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:33.086 11:24:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:33.086 11:24:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:33.086 11:24:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:33.086 1+0 records in 00:10:33.086 1+0 records out 00:10:33.086 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000565753 s, 7.2 MB/s 00:10:33.086 11:24:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:33.086 11:24:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:33.086 11:24:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:33.086 11:24:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:33.086 11:24:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:33.086 11:24:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:33.086 11:24:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:33.086 11:24:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:33.086 11:24:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:33.086 11:24:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:33.344 11:24:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:33.344 { 00:10:33.344 "nbd_device": "/dev/nbd0", 00:10:33.344 "bdev_name": "Nvme0n1" 00:10:33.344 }, 00:10:33.344 { 00:10:33.344 "nbd_device": "/dev/nbd1", 00:10:33.344 "bdev_name": "Nvme1n1" 00:10:33.344 }, 00:10:33.344 { 00:10:33.344 "nbd_device": "/dev/nbd10", 00:10:33.344 "bdev_name": "Nvme2n1" 00:10:33.344 }, 00:10:33.344 { 00:10:33.344 "nbd_device": "/dev/nbd11", 00:10:33.344 "bdev_name": "Nvme2n2" 00:10:33.344 }, 00:10:33.344 { 00:10:33.344 "nbd_device": "/dev/nbd12", 00:10:33.344 "bdev_name": "Nvme2n3" 00:10:33.344 }, 00:10:33.344 { 00:10:33.344 "nbd_device": "/dev/nbd13", 00:10:33.344 "bdev_name": "Nvme3n1" 00:10:33.344 } 00:10:33.344 ]' 00:10:33.344 11:24:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:33.344 { 00:10:33.344 "nbd_device": "/dev/nbd0", 00:10:33.344 "bdev_name": "Nvme0n1" 00:10:33.344 }, 00:10:33.345 { 00:10:33.345 "nbd_device": "/dev/nbd1", 00:10:33.345 "bdev_name": "Nvme1n1" 00:10:33.345 }, 00:10:33.345 { 00:10:33.345 "nbd_device": "/dev/nbd10", 00:10:33.345 "bdev_name": "Nvme2n1" 00:10:33.345 }, 00:10:33.345 
{ 00:10:33.345 "nbd_device": "/dev/nbd11", 00:10:33.345 "bdev_name": "Nvme2n2" 00:10:33.345 }, 00:10:33.345 { 00:10:33.345 "nbd_device": "/dev/nbd12", 00:10:33.345 "bdev_name": "Nvme2n3" 00:10:33.345 }, 00:10:33.345 { 00:10:33.345 "nbd_device": "/dev/nbd13", 00:10:33.345 "bdev_name": "Nvme3n1" 00:10:33.345 } 00:10:33.345 ]' 00:10:33.345 11:24:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:33.345 11:24:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:33.345 /dev/nbd1 00:10:33.345 /dev/nbd10 00:10:33.345 /dev/nbd11 00:10:33.345 /dev/nbd12 00:10:33.345 /dev/nbd13' 00:10:33.345 11:24:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:33.345 /dev/nbd1 00:10:33.345 /dev/nbd10 00:10:33.345 /dev/nbd11 00:10:33.345 /dev/nbd12 00:10:33.345 /dev/nbd13' 00:10:33.345 11:24:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:33.345 11:24:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:10:33.345 11:24:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:10:33.345 11:24:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:10:33.345 11:24:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:10:33.345 11:24:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:10:33.345 11:24:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:33.345 11:24:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:33.345 11:24:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:33.345 11:24:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:33.345 11:24:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:33.345 11:24:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:10:33.345 256+0 records in 00:10:33.345 256+0 records out 00:10:33.345 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00952941 s, 110 MB/s 00:10:33.345 11:24:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:33.345 11:24:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:33.345 256+0 records in 00:10:33.345 256+0 records out 00:10:33.345 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.127952 s, 8.2 MB/s 00:10:33.345 11:24:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:33.345 11:24:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:33.604 256+0 records in 00:10:33.604 256+0 records out 00:10:33.604 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.135774 s, 7.7 MB/s 00:10:33.604 11:24:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:33.604 11:24:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:10:33.604 256+0 records in 00:10:33.604 256+0 records out 00:10:33.604 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.134077 s, 7.8 MB/s 00:10:33.604 11:24:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:33.604 11:24:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:10:33.862 256+0 records in 00:10:33.862 256+0 records out 00:10:33.862 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.134684 s, 7.8 MB/s 00:10:33.862 11:24:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:33.862 11:24:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:10:34.121 256+0 records in 00:10:34.121 256+0 records out 00:10:34.121 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.137061 s, 7.7 MB/s 00:10:34.121 11:24:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:34.121 11:24:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:10:34.121 256+0 records in 00:10:34.121 256+0 records out 00:10:34.121 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.136663 s, 7.7 MB/s 00:10:34.121 11:24:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:10:34.121 11:24:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:34.121 11:24:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:34.121 11:24:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:34.121 11:24:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:34.121 11:24:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:34.121 11:24:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:34.121 11:24:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:34.121 11:24:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:10:34.121 11:24:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:34.121 11:24:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:10:34.121 11:24:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:34.121 11:24:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:10:34.121 11:24:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:34.121 11:24:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:10:34.121 11:24:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:34.121 11:24:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:10:34.121 11:24:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:34.121 11:24:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # 
cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:10:34.121 11:24:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:34.121 11:24:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:10:34.121 11:24:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:34.121 11:24:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:34.121 11:24:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:34.121 11:24:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:34.121 11:24:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:34.121 11:24:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:34.380 11:24:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:34.380 11:24:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:34.380 11:24:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:34.380 11:24:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:34.380 11:24:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:34.380 11:24:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:34.380 11:24:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:34.380 11:24:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:34.380 11:24:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:34.380 11:24:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:34.638 11:24:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:34.898 11:24:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:34.898 11:24:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:34.898 11:24:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:34.898 11:24:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:34.898 11:24:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:34.898 11:24:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:34.898 11:24:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:34.898 11:24:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:34.898 11:24:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:10:35.157 11:24:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:10:35.157 11:24:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:10:35.157 11:24:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:10:35.157 11:24:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:35.157 11:24:40 blockdev_nvme.bdev_nbd -- 
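The write/verify pass above is nbd_dd_data_verify: one shared 1 MiB buffer of /dev/urandom data is pushed through every node with O_DIRECT, then compared back byte-for-byte with cmp. Stripped to its essentials (same paths, sizes, and device list as the trace; the two per-device phases are folded into plain loops here):

    randfile=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    dd if=/dev/urandom of="$randfile" bs=4096 count=256                 # 1 MiB test pattern
    for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
        dd if="$randfile" of="$nbd" bs=4096 count=256 oflag=direct      # write phase
    done
    for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
        cmp -b -n 1M "$randfile" "$nbd"                                 # verify phase: fails on any mismatch
    done
    rm "$randfile"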
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:35.157 11:24:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:10:35.157 11:24:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:35.157 11:24:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:35.157 11:24:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:35.157 11:24:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:10:35.415 11:24:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:10:35.415 11:24:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:10:35.415 11:24:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:10:35.415 11:24:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:35.415 11:24:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:35.415 11:24:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:10:35.415 11:24:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:35.415 11:24:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:35.415 11:24:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:35.415 11:24:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:10:35.673 11:24:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:10:35.673 11:24:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:10:35.673 11:24:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:10:35.673 11:24:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:35.673 11:24:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:35.673 11:24:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:10:35.673 11:24:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:35.673 11:24:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:35.673 11:24:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:35.673 11:24:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:10:35.930 11:24:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:10:35.930 11:24:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:10:35.930 11:24:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:10:35.930 11:24:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:35.930 11:24:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:35.930 11:24:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:10:35.930 11:24:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:35.930 11:24:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:35.930 11:24:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:35.930 11:24:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:10:35.930 11:24:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:36.188 11:24:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:36.188 11:24:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:36.188 11:24:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:36.188 11:24:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:36.188 11:24:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:36.188 11:24:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:10:36.188 11:24:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:10:36.188 11:24:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:10:36.188 11:24:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:10:36.188 11:24:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:10:36.188 11:24:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:36.188 11:24:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:10:36.188 11:24:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:10:36.188 11:24:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:36.188 11:24:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:10:36.188 11:24:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:10:36.446 malloc_lvol_verify 00:10:36.446 11:24:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:10:36.704 5bf5b18c-4c20-4764-b01e-9711de8774e9 00:10:36.704 11:24:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:10:36.962 5e366ea1-72cf-4d26-8dfb-65ea4768fc97 00:10:36.962 11:24:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:10:37.220 /dev/nbd0 00:10:37.220 11:24:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:10:37.220 11:24:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:10:37.220 11:24:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:10:37.220 11:24:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:10:37.220 11:24:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:10:37.220 mke2fs 1.47.0 (5-Feb-2023) 00:10:37.220 Discarding device blocks: 0/4096 done 00:10:37.220 Creating filesystem with 4096 1k blocks and 1024 inodes 00:10:37.220 00:10:37.220 Allocating group tables: 0/1 done 00:10:37.220 Writing inode tables: 0/1 done 00:10:37.220 Creating journal (1024 blocks): done 00:10:37.220 Writing superblocks and filesystem accounting information: 0/1 done 00:10:37.220 00:10:37.220 11:24:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:10:37.220 11:24:42 
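The nbd_with_lvol_verify step traced above stacks a logical volume on a malloc bdev, exports it, and proves the whole path works by putting a filesystem on it. The RPC sequence, with names and sizes exactly as issued (rpc.py shortened from its full path; the mke2fs output above confirms the final step did real I/O):

    sock=/var/tmp/spdk-nbd.sock
    rpc.py -s $sock bdev_malloc_create -b malloc_lvol_verify 16 512   # RAM-backed bdev: 16 MiB, 512 B blocks
    rpc.py -s $sock bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvolstore on that bdev
    rpc.py -s $sock bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol inside it
    rpc.py -s $sock nbd_start_disk lvs/lvol /dev/nbd0                 # export the lvol as /dev/nbd0
    mkfs.ext4 /dev/nbd0                                               # end-to-end write test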
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:37.220 11:24:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:37.220 11:24:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:37.220 11:24:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:37.220 11:24:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:37.220 11:24:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:37.785 11:24:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:37.785 11:24:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:37.785 11:24:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:37.785 11:24:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:37.785 11:24:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:37.785 11:24:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:37.785 11:24:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:37.785 11:24:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:37.785 11:24:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61641 00:10:37.785 11:24:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61641 ']' 00:10:37.785 11:24:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61641 00:10:37.785 11:24:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:10:37.785 11:24:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:37.785 11:24:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61641 00:10:37.785 killing process with pid 61641 00:10:37.785 11:24:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:37.785 11:24:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:37.785 11:24:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61641' 00:10:37.785 11:24:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61641 00:10:37.785 11:24:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61641 00:10:39.159 11:24:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:10:39.159 00:10:39.159 real 0m13.322s 00:10:39.159 user 0m17.808s 00:10:39.159 sys 0m5.205s 00:10:39.159 11:24:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:39.159 11:24:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:10:39.159 ************************************ 00:10:39.159 END TEST bdev_nbd 00:10:39.159 ************************************ 00:10:39.159 11:24:44 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:10:39.159 11:24:44 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:10:39.159 skipping fio tests on NVMe due to multi-ns failures. 00:10:39.160 11:24:44 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
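killprocess, traced above for pid 61641, is the suite's standard app teardown: confirm the pid is set and alive, check the command name so nothing unrelated gets signalled, then kill and reap. A hedged reconstruction (the conditionals are inferred from the xtrace lines; the real helper lives in common/autotest_common.sh):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                  # the '[' -z 61641 ']' guard in the trace
        kill -0 "$pid" || return 0                 # already gone, nothing to do
        local name=$(ps --no-headers -o comm= "$pid")   # reactor_0 here: an SPDK reactor thread
        if [ "$name" != sudo ]; then               # never signal a sudo wrapper directly
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid"                            # reap, so the exit status propagates to the test
        fi
    }
    killprocess 61641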
00:10:39.160 11:24:44 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT
00:10:39.160 11:24:44 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:10:39.160 11:24:44 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:10:39.160 11:24:44 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:39.160 11:24:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:10:39.160 ************************************
00:10:39.160 START TEST bdev_verify
00:10:39.160 ************************************
00:10:39.160 11:24:44 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:10:39.160 [2024-11-20 11:24:44.901119] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization...
00:10:39.160 [2024-11-20 11:24:44.901265] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62052 ]
00:10:39.418 [2024-11-20 11:24:45.081633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:10:39.676 [2024-11-20 11:24:45.220252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:39.676 [2024-11-20 11:24:45.220296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:40.612 Running I/O for 5 seconds...
00:10:42.507 18688.00 IOPS, 73.00 MiB/s
[2024-11-20T11:24:49.645Z] 17792.00 IOPS, 69.50 MiB/s
[2024-11-20T11:24:50.581Z] 17898.67 IOPS, 69.92 MiB/s
[2024-11-20T11:24:51.515Z] 17792.00 IOPS, 69.50 MiB/s
[2024-11-20T11:24:51.515Z] 17740.80 IOPS, 69.30 MiB/s
00:10:45.753 Latency(us)
00:10:45.753 [2024-11-20T11:24:51.515Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:45.753 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:45.753 Verification LBA range: start 0x0 length 0xbd0bd
00:10:45.753 Nvme0n1 : 5.06 1467.77 5.73 0.00 0.00 86926.55 16477.62 88879.30
00:10:45.753 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:45.753 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:10:45.753 Nvme0n1 : 5.06 1442.15 5.63 0.00 0.00 88470.92 15853.47 184749.10
00:10:45.753 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:45.753 Verification LBA range: start 0x0 length 0xa0000
00:10:45.753 Nvme1n1 : 5.06 1467.34 5.73 0.00 0.00 86806.83 18474.91 80890.15
00:10:45.753 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:45.753 Verification LBA range: start 0xa0000 length 0xa0000
00:10:45.753 Nvme1n1 : 5.06 1441.48 5.63 0.00 0.00 88287.36 18724.57 167772.16
00:10:45.753 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:45.753 Verification LBA range: start 0x0 length 0x80000
00:10:45.753 Nvme2n1 : 5.06 1466.64 5.73 0.00 0.00 86643.95 19972.88 77894.22
00:10:45.753 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:45.753 Verification LBA range: start 0x80000 length 0x80000
00:10:45.753 Nvme2n1 : 5.06 1440.85 5.63 0.00 0.00 88141.76 19473.55 170768.09
00:10:45.753 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:45.753 Verification LBA range: start 0x0 length 0x80000
00:10:45.753 Nvme2n2 : 5.06 1466.00 5.73 0.00 0.00 86472.29 19473.55 81389.47
00:10:45.753 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:45.753 Verification LBA range: start 0x80000 length 0x80000
00:10:45.753 Nvme2n2 : 5.07 1440.23 5.63 0.00 0.00 87967.65 18849.40 180754.53
00:10:45.753 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:45.753 Verification LBA range: start 0x0 length 0x80000
00:10:45.753 Nvme2n3 : 5.08 1472.68 5.75 0.00 0.00 85876.22 11734.06 84884.72
00:10:45.753 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:45.753 Verification LBA range: start 0x80000 length 0x80000
00:10:45.753 Nvme2n3 : 5.08 1448.56 5.66 0.00 0.00 87305.48 6647.22 183750.46
00:10:45.753 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:45.753 Verification LBA range: start 0x0 length 0x20000
00:10:45.753 Nvme3n1 : 5.11 1477.98 5.77 0.00 0.00 85496.02 8800.55 88379.98
00:10:45.753 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:45.753 Verification LBA range: start 0x20000 length 0x20000
00:10:45.753 Nvme3n1 : 5.09 1457.31 5.69 0.00 0.00 86681.28 10236.10 186746.39
00:10:45.753 [2024-11-20T11:24:51.515Z] ===================================================================================================================
00:10:45.753 [2024-11-20T11:24:51.515Z] Total : 17488.98 68.32 0.00 0.00 87079.85 6647.22 186746.39
00:10:47.129
00:10:47.129 real 0m7.990s
00:10:47.129 user 0m14.715s
00:10:47.129 sys 0m0.332s
00:10:47.129 11:24:52 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:47.129 11:24:52 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:10:47.129 ************************************
00:10:47.129 END TEST bdev_verify
00:10:47.129 ************************************
00:10:47.129 11:24:52 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:10:47.129 11:24:52 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:10:47.129 11:24:52 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:47.129 11:24:52 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:10:47.129 ************************************
00:10:47.129 START TEST bdev_verify_big_io
00:10:47.129 ************************************
00:10:47.129 11:24:52 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:10:47.436 [2024-11-20 11:24:52.968747] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization...
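Both the bdev_verify run summarized above and the bdev_verify_big_io run starting here drive the same bdevperf example binary against the same six NVMe bdevs; only the I/O size changes between them. The invocation, with flags verbatim from the trace (0x3 matches the two reactor cores reported at startup):

    # -q 128: queue depth; -o: I/O size in bytes (4096 above, 65536 here);
    # -w verify: write-then-read-back workload; -t 5: seconds; -m 0x3: cores 0 and 1
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3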
00:10:47.436 [2024-11-20 11:24:52.969141] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62156 ]
00:10:47.436 [2024-11-20 11:24:53.172952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:10:47.695 [2024-11-20 11:24:53.301325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:47.695 [2024-11-20 11:24:53.301375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:48.626 Running I/O for 5 seconds...
00:10:52.425 405.00 IOPS, 25.31 MiB/s
[2024-11-20T11:25:00.089Z] 1316.50 IOPS, 82.28 MiB/s
[2024-11-20T11:25:00.347Z] 2241.33 IOPS, 140.08 MiB/s
00:10:54.585 Latency(us)
00:10:54.585 [2024-11-20T11:25:00.347Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:54.585 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:54.585 Verification LBA range: start 0x0 length 0xbd0b
00:10:54.585 Nvme0n1 : 5.73 134.04 8.38 0.00 0.00 925295.34 23218.47 1046578.71
00:10:54.586 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:54.586 Verification LBA range: start 0xbd0b length 0xbd0b
00:10:54.586 Nvme0n1 : 5.83 131.75 8.23 0.00 0.00 917559.59 78892.86 858833.68
00:10:54.586 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:54.586 Verification LBA range: start 0x0 length 0xa000
00:10:54.586 Nvme1n1 : 5.73 133.95 8.37 0.00 0.00 893281.69 116342.00 858833.68
00:10:54.586 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:54.586 Verification LBA range: start 0xa000 length 0xa000
00:10:54.586 Nvme1n1 : 5.83 131.66 8.23 0.00 0.00 888043.68 101861.67 818887.92
00:10:54.586 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:54.586 Verification LBA range: start 0x0 length 0x8000
00:10:54.586 Nvme2n1 : 5.83 136.89 8.56 0.00 0.00 848835.35 92873.87 766958.45
00:10:54.586 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:54.586 Verification LBA range: start 0x8000 length 0x8000
00:10:54.586 Nvme2n1 : 5.87 134.61 8.41 0.00 0.00 844357.45 32955.25 830871.65
00:10:54.586 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:54.586 Verification LBA range: start 0x0 length 0x8000
00:10:54.586 Nvme2n2 : 5.85 142.11 8.88 0.00 0.00 797713.25 20721.86 778942.17
00:10:54.586 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:54.586 Verification LBA range: start 0x8000 length 0x8000
00:10:54.586 Nvme2n2 : 5.88 141.57 8.85 0.00 0.00 784356.19 4275.44 838860.80
00:10:54.586 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:54.586 Verification LBA range: start 0x0 length 0x8000
00:10:54.586 Nvme2n3 : 5.88 144.95 9.06 0.00 0.00 760486.80 27088.21 966687.21
00:10:54.586 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:54.586 Verification LBA range: start 0x8000 length 0x8000
00:10:54.586 Nvme2n3 : 5.89 146.73 9.17 0.00 0.00 733414.61 7926.74 858833.68
00:10:54.586 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:54.586 Verification LBA range: start 0x0 length 0x2000
00:10:54.586 Nvme3n1 : 5.92 154.87 9.68 0.00 0.00 691452.00 2793.08 1741634.80
00:10:54.586 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:54.586 Verification LBA range: start 0x2000 length 0x2000
00:10:54.586 Nvme3n1 : 5.83 128.21 8.01 0.00 0.00 956017.71 29959.31 1038589.56
00:10:54.586 [2024-11-20T11:25:00.348Z] ===================================================================================================================
00:10:54.586 [2024-11-20T11:25:00.348Z] Total : 1661.33 103.83 0.00 0.00 832007.98 2793.08 1741634.80
00:10:56.488
00:10:56.488 real 0m9.013s
00:10:56.488 user 0m16.668s
00:10:56.488 sys 0m0.363s
00:10:56.488 11:25:01 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:56.488 11:25:01 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:10:56.488 ************************************
00:10:56.488 END TEST bdev_verify_big_io
00:10:56.488 ************************************
00:10:56.488 11:25:01 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:56.488 11:25:01 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:10:56.488 11:25:01 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:56.488 11:25:01 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:10:56.488 ************************************
00:10:56.488 START TEST bdev_write_zeroes
00:10:56.488 ************************************
00:10:56.488 11:25:01 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:56.488 [2024-11-20 11:25:02.019856] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization...
00:10:56.488 [2024-11-20 11:25:02.020010] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62276 ]
00:10:56.488 [2024-11-20 11:25:02.199758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:56.746 [2024-11-20 11:25:02.337398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:57.680 Running I/O for 1 seconds...
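Every START TEST/END TEST pair in this log, including the bdev_write_zeroes run now in flight, comes from the run_test wrapper in common/autotest_common.sh. A condensed sketch of what it does, inferred from the banners and the real/user/sys lines it leaves behind (the actual helper also manages xtrace state and failure bookkeeping, which is omitted here):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                     # the timed body produces the real/user/sys lines
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }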
00:10:58.614 47936.00 IOPS, 187.25 MiB/s
00:10:58.614 Latency(us)
00:10:58.614 [2024-11-20T11:25:04.376Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:58.614 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:58.614 Nvme0n1 : 1.03 7923.79 30.95 0.00 0.00 16104.01 13294.45 33704.23
00:10:58.614 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:58.614 Nvme1n1 : 1.04 7910.16 30.90 0.00 0.00 16102.33 13294.45 32705.58
00:10:58.614 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:58.614 Nvme2n1 : 1.04 7897.84 30.85 0.00 0.00 16079.14 13232.03 31582.11
00:10:58.614 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:58.614 Nvme2n2 : 1.04 7884.79 30.80 0.00 0.00 15985.23 8925.38 31082.79
00:10:58.614 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:58.614 Nvme2n3 : 1.04 7870.60 30.74 0.00 0.00 15962.81 7115.34 31082.79
00:10:58.614 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:58.614 Nvme3n1 : 1.04 7796.98 30.46 0.00 0.00 16080.62 13232.03 33953.89
00:10:58.614 [2024-11-20T11:25:04.376Z] ===================================================================================================================
00:10:58.614 [2024-11-20T11:25:04.376Z] Total : 47284.15 184.70 0.00 0.00 16052.32 7115.34 33953.89
00:10:59.991
00:10:59.991 real 0m3.661s
00:10:59.991 user 0m3.249s
00:10:59.991 sys 0m0.287s
00:10:59.991 11:25:05 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:59.991 ************************************
00:10:59.991 END TEST bdev_write_zeroes
00:10:59.991 11:25:05 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:10:59.991 ************************************
00:10:59.991 11:25:05 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:59.991 11:25:05 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:10:59.991 11:25:05 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:59.991 11:25:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:10:59.991 ************************************
00:10:59.991 START TEST bdev_json_nonenclosed
00:10:59.991 ************************************
00:10:59.991 11:25:05 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:11:00.249 [2024-11-20 11:25:05.752701] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization...
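bdev_json_nonenclosed, just launched above, is a negative test: bdevperf is handed a config whose top level is not wrapped in an object, and the test passes only if the app rejects it cleanly (the "not enclosed in {}" error below). The real nonenclosed.json is not printed in this log; a plausible shape for the file being rejected, valid JSON whose top level is an array rather than an object, would be:

    # hypothetical contents; the actual file ships in spdk/test/bdev/
    cat > nonenclosed.json <<'EOF'
    [
      { "subsystem": "bdev", "config": [] }
    ]
    EOF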
00:11:00.249 [2024-11-20 11:25:05.752880] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62329 ] 00:11:00.249 [2024-11-20 11:25:05.945819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.508 [2024-11-20 11:25:06.154853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.508 [2024-11-20 11:25:06.155028] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:11:00.508 [2024-11-20 11:25:06.155074] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:00.508 [2024-11-20 11:25:06.155097] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:00.766 00:11:00.766 real 0m0.841s 00:11:00.766 user 0m0.542s 00:11:00.766 sys 0m0.191s 00:11:00.766 11:25:06 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:00.766 ************************************ 00:11:00.766 END TEST bdev_json_nonenclosed 00:11:00.767 11:25:06 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:11:00.767 ************************************ 00:11:00.767 11:25:06 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:00.767 11:25:06 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:11:00.767 11:25:06 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:00.767 11:25:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:00.767 ************************************ 00:11:00.767 START TEST bdev_json_nonarray 00:11:00.767 ************************************ 00:11:00.767 11:25:06 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:01.025 [2024-11-20 11:25:06.614718] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:11:01.025 [2024-11-20 11:25:06.614863] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62360 ] 00:11:01.284 [2024-11-20 11:25:06.794728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.284 [2024-11-20 11:25:06.957990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.284 [2024-11-20 11:25:06.958132] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
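The bdev_json_nonarray run in progress here covers the sibling failure mode: the config is enclosed in {} but its "subsystems" key maps to something other than an array. For contrast, the minimal valid skeleton both negative tests guard (key names taken from the error messages; real configs carry bdev entries inside "config"):

    cat > bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": []
        }
      ]
    }
    EOF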
00:11:01.284 [2024-11-20 11:25:06.958164] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:01.284 [2024-11-20 11:25:06.958180] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:01.543 00:11:01.543 real 0m0.767s 00:11:01.543 user 0m0.488s 00:11:01.543 sys 0m0.173s 00:11:01.543 11:25:07 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:01.543 ************************************ 00:11:01.543 END TEST bdev_json_nonarray 00:11:01.543 ************************************ 00:11:01.543 11:25:07 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:11:01.803 11:25:07 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:11:01.803 11:25:07 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:11:01.803 11:25:07 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:11:01.803 11:25:07 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:11:01.803 11:25:07 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:11:01.803 11:25:07 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:11:01.803 11:25:07 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:01.803 11:25:07 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:11:01.803 11:25:07 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:11:01.803 11:25:07 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:11:01.803 11:25:07 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:11:01.803 00:11:01.803 real 0m46.850s 00:11:01.803 user 1m8.775s 00:11:01.803 sys 0m8.450s 00:11:01.803 11:25:07 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:01.803 ************************************ 00:11:01.803 END TEST blockdev_nvme 00:11:01.803 11:25:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:01.803 ************************************ 00:11:01.803 11:25:07 -- spdk/autotest.sh@209 -- # uname -s 00:11:01.803 11:25:07 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:11:01.803 11:25:07 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:11:01.803 11:25:07 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:01.803 11:25:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:01.803 11:25:07 -- common/autotest_common.sh@10 -- # set +x 00:11:01.803 ************************************ 00:11:01.803 START TEST blockdev_nvme_gpt 00:11:01.803 ************************************ 00:11:01.803 11:25:07 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:11:01.803 * Looking for test storage... 
00:11:01.803 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:11:01.803 11:25:07 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:01.803 11:25:07 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:11:01.803 11:25:07 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:02.062 11:25:07 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:02.062 11:25:07 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:02.062 11:25:07 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:02.062 11:25:07 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:02.062 11:25:07 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:11:02.062 11:25:07 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:11:02.062 11:25:07 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:11:02.062 11:25:07 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:11:02.062 11:25:07 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:11:02.062 11:25:07 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:11:02.062 11:25:07 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:11:02.062 11:25:07 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:02.062 11:25:07 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:11:02.062 11:25:07 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:11:02.062 11:25:07 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:02.062 11:25:07 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:02.062 11:25:07 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:11:02.062 11:25:07 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:11:02.062 11:25:07 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:02.062 11:25:07 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:11:02.062 11:25:07 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:11:02.062 11:25:07 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:11:02.062 11:25:07 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:11:02.062 11:25:07 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:02.062 11:25:07 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:11:02.062 11:25:07 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:11:02.062 11:25:07 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:02.062 11:25:07 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:02.062 11:25:07 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:11:02.062 11:25:07 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:02.062 11:25:07 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:02.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.062 --rc genhtml_branch_coverage=1 00:11:02.062 --rc genhtml_function_coverage=1 00:11:02.062 --rc genhtml_legend=1 00:11:02.062 --rc geninfo_all_blocks=1 00:11:02.062 --rc geninfo_unexecuted_blocks=1 00:11:02.062 00:11:02.062 ' 00:11:02.062 11:25:07 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:02.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.062 --rc 
genhtml_branch_coverage=1 00:11:02.062 --rc genhtml_function_coverage=1 00:11:02.062 --rc genhtml_legend=1 00:11:02.062 --rc geninfo_all_blocks=1 00:11:02.062 --rc geninfo_unexecuted_blocks=1 00:11:02.062 00:11:02.062 ' 00:11:02.062 11:25:07 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:02.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.062 --rc genhtml_branch_coverage=1 00:11:02.062 --rc genhtml_function_coverage=1 00:11:02.062 --rc genhtml_legend=1 00:11:02.062 --rc geninfo_all_blocks=1 00:11:02.062 --rc geninfo_unexecuted_blocks=1 00:11:02.062 00:11:02.062 ' 00:11:02.062 11:25:07 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:02.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.062 --rc genhtml_branch_coverage=1 00:11:02.062 --rc genhtml_function_coverage=1 00:11:02.062 --rc genhtml_legend=1 00:11:02.062 --rc geninfo_all_blocks=1 00:11:02.062 --rc geninfo_unexecuted_blocks=1 00:11:02.062 00:11:02.062 ' 00:11:02.062 11:25:07 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:02.062 11:25:07 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:11:02.062 11:25:07 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:11:02.062 11:25:07 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:02.062 11:25:07 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:11:02.062 11:25:07 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:11:02.062 11:25:07 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:11:02.062 11:25:07 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:11:02.062 11:25:07 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:11:02.062 11:25:07 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:11:02.062 11:25:07 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:11:02.062 11:25:07 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:11:02.062 11:25:07 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:11:02.062 11:25:07 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:11:02.062 11:25:07 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:11:02.062 11:25:07 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:11:02.062 11:25:07 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:11:02.062 11:25:07 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:11:02.062 11:25:07 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:11:02.062 11:25:07 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:11:02.062 11:25:07 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:11:02.062 11:25:07 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:11:02.062 11:25:07 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:11:02.062 11:25:07 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:11:02.062 11:25:07 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62444 00:11:02.062 11:25:07 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:02.062 11:25:07 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 62444 
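The lcov probe a few lines up goes through a generic comparator in scripts/common.sh: lt 1.15 2 becomes cmp_versions 1.15 '<' 2, each version string is split into fields on '.', '-' and ':', missing fields default to 0, and the first unequal pair of fields decides the relation. A condensed sketch of that logic (simplified: the real helper also normalizes non-numeric fields through its decimal() routine):

# Sketch of the version comparator traced above, not the verbatim implementation.
cmp_versions() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"    # e.g. "1.15" -> (1 15)
    read -ra ver2 <<< "$3"    # e.g. "2"    -> (2)
    local op=$2 v d1 d2
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        d1=${ver1[v]:-0} d2=${ver2[v]:-0}    # missing fields compare as 0
        ((d1 > d2)) && { [[ $op == '>' || $op == '>=' ]]; return; }
        ((d1 < d2)) && { [[ $op == '<' || $op == '<=' ]]; return; }
    done
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]    # all fields equal
}
lt() { cmp_versions "$1" '<' "$2"; }
# Here lt 1.15 2 succeeds (1 < 2 at the first field), so the lcov branch/function
# coverage flags are appended to LCOV_OPTS as traced above.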
00:11:02.062 11:25:07 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 62444 ']' 00:11:02.062 11:25:07 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.062 11:25:07 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:02.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.062 11:25:07 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.062 11:25:07 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:02.062 11:25:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:02.062 11:25:07 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:11:02.062 [2024-11-20 11:25:07.742215] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:11:02.062 [2024-11-20 11:25:07.742348] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62444 ] 00:11:02.321 [2024-11-20 11:25:07.931792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.579 [2024-11-20 11:25:08.101220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.513 11:25:09 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:03.513 11:25:09 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:11:03.513 11:25:09 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:11:03.514 11:25:09 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:11:03.514 11:25:09 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:03.772 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:04.031 Waiting for block devices as requested 00:11:04.031 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:04.289 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:04.289 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:04.548 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:09.812 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:09.812 11:25:15 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:11:09.812 11:25:15 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:11:09.812 11:25:15 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:11:09.812 11:25:15 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:11:09.812 11:25:15 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:11:09.812 11:25:15 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:11:09.812 11:25:15 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:11:09.812 11:25:15 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:11:09.812 11:25:15 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:11:09.812 11:25:15 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:11:09.812 11:25:15 
blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:11:09.812 11:25:15 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:11:09.812 11:25:15 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:11:09.812 11:25:15 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:11:09.812 11:25:15 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:11:09.812 11:25:15 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:11:09.812 11:25:15 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:11:09.812 11:25:15 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:11:09.812 11:25:15 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:11:09.812 11:25:15 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:11:09.812 11:25:15 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:11:09.812 11:25:15 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:11:09.812 11:25:15 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:11:09.812 11:25:15 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:11:09.812 11:25:15 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:11:09.812 11:25:15 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:11:09.812 11:25:15 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:11:09.812 11:25:15 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:11:09.812 11:25:15 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:11:09.812 11:25:15 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:11:09.812 11:25:15 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:11:09.812 11:25:15 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:11:09.812 11:25:15 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:11:09.812 11:25:15 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:11:09.812 11:25:15 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:11:09.812 11:25:15 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:11:09.812 11:25:15 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:11:09.812 11:25:15 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:11:09.812 11:25:15 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:11:09.812 11:25:15 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:11:09.812 11:25:15 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:11:09.812 11:25:15 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:11:09.812 11:25:15 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:11:09.812 11:25:15 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:11:09.812 11:25:15 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:11:09.812 11:25:15 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:11:09.812 11:25:15 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:11:09.812 BYT; 00:11:09.812 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:11:09.812 11:25:15 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:11:09.812 BYT; 00:11:09.812 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:11:09.812 11:25:15 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:11:09.812 11:25:15 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:11:09.812 11:25:15 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:11:09.812 11:25:15 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:11:09.812 11:25:15 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:11:09.812 11:25:15 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:11:09.812 11:25:15 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:11:09.813 11:25:15 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:11:09.813 11:25:15 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:11:09.813 11:25:15 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:11:09.813 11:25:15 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:11:09.813 11:25:15 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:11:09.813 11:25:15 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:11:09.813 11:25:15 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:11:09.813 11:25:15 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:11:09.813 11:25:15 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:11:09.813 11:25:15 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:11:09.813 11:25:15 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:11:09.813 11:25:15 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:11:09.813 11:25:15 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:11:09.813 11:25:15 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:11:09.813 11:25:15 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:11:09.813 11:25:15 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:11:09.813 11:25:15 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:11:09.813 11:25:15 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:11:09.813 11:25:15 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:11:09.813 11:25:15 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:11:09.813 11:25:15 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:11:09.813 11:25:15 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:11:10.749 The operation has completed successfully. 00:11:10.749 11:25:16 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:11:11.684 The operation has completed successfully. 00:11:11.684 11:25:17 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:12.251 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:13.186 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:13.186 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:13.186 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:13.186 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:13.186 11:25:18 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:11:13.186 11:25:18 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.186 11:25:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:13.186 [] 00:11:13.186 11:25:18 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.186 11:25:18 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:11:13.186 11:25:18 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:11:13.186 11:25:18 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:11:13.186 11:25:18 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:13.186 11:25:18 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:11:13.186 11:25:18 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.186 11:25:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:13.754 11:25:19 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.754 11:25:19 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:11:13.754 11:25:19 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.754 11:25:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:13.754 11:25:19 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.754 11:25:19 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:11:13.754 11:25:19 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:11:13.754 11:25:19 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.754 11:25:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:13.754 11:25:19 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.754 11:25:19 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:11:13.754 11:25:19 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.754 11:25:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:13.754 11:25:19 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.754 11:25:19 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:11:13.754 11:25:19 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.754 11:25:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:13.754 11:25:19 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.754 11:25:19 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:11:13.754 11:25:19 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:11:13.754 11:25:19 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:11:13.754 11:25:19 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.754 11:25:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:13.754 11:25:19 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.754 11:25:19 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:11:13.754 11:25:19 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:11:13.755 11:25:19 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "dc1ff450-c441-4f40-86a7-af6f31bbee67"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "dc1ff450-c441-4f40-86a7-af6f31bbee67",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "2a0af127-70e8-465a-9a60-0dcca73f800c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2a0af127-70e8-465a-9a60-0dcca73f800c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "3e33bf74-0507-4268-b492-3449cc4c1d52"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3e33bf74-0507-4268-b492-3449cc4c1d52",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "6ab6177b-9e32-4096-995d-04c7ec75d246"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6ab6177b-9e32-4096-995d-04c7ec75d246",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "5c27e7da-b8b6-4348-88e1-6c60e0a563a0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "5c27e7da-b8b6-4348-88e1-6c60e0a563a0",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:11:13.755 11:25:19 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:11:13.755 11:25:19 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:11:13.755 11:25:19 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:11:13.755 11:25:19 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 62444 00:11:13.755 11:25:19 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 62444 ']' 00:11:13.755 11:25:19 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 62444 00:11:13.755 11:25:19 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:11:13.755 11:25:19 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:13.755 11:25:19 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62444 00:11:14.013 11:25:19 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:14.013 11:25:19 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:14.013 killing process with pid 62444 00:11:14.013 11:25:19 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62444' 00:11:14.013 11:25:19 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 62444 00:11:14.013 11:25:19 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 62444 00:11:16.541 11:25:22 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:16.541 11:25:22 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:11:16.541 11:25:22 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:16.541 11:25:22 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:16.541 11:25:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:16.541 ************************************ 00:11:16.541 START TEST bdev_hello_world 00:11:16.541 ************************************ 00:11:16.542 11:25:22 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:11:16.542 
[2024-11-20 11:25:22.257051] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:11:16.542 [2024-11-20 11:25:22.257227] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63091 ] 00:11:16.800 [2024-11-20 11:25:22.454985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.060 [2024-11-20 11:25:22.591451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.629 [2024-11-20 11:25:23.273163] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:11:17.629 [2024-11-20 11:25:23.273212] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:11:17.629 [2024-11-20 11:25:23.273241] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:11:17.629 [2024-11-20 11:25:23.276589] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:11:17.629 [2024-11-20 11:25:23.277048] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:11:17.629 [2024-11-20 11:25:23.277090] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:11:17.629 [2024-11-20 11:25:23.277306] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:11:17.629 00:11:17.629 [2024-11-20 11:25:23.277351] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:11:19.005 00:11:19.005 real 0m2.393s 00:11:19.005 user 0m2.005s 00:11:19.005 sys 0m0.275s 00:11:19.005 11:25:24 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:19.005 11:25:24 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:11:19.005 ************************************ 00:11:19.005 END TEST bdev_hello_world 00:11:19.005 ************************************ 00:11:19.005 11:25:24 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:11:19.005 11:25:24 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:19.005 11:25:24 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:19.005 11:25:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:19.005 ************************************ 00:11:19.005 START TEST bdev_bounds 00:11:19.005 ************************************ 00:11:19.005 11:25:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:11:19.005 11:25:24 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=63139 00:11:19.005 11:25:24 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:19.005 11:25:24 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:11:19.005 Process bdevio pid: 63139 00:11:19.005 11:25:24 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 63139' 00:11:19.005 11:25:24 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 63139 00:11:19.005 11:25:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 63139 ']' 00:11:19.005 11:25:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:19.005 11:25:24 
blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:19.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:19.005 11:25:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:19.005 11:25:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:19.005 11:25:24 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:11:19.005 [2024-11-20 11:25:24.716636] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:11:19.005 [2024-11-20 11:25:24.716842] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63139 ] 00:11:19.263 [2024-11-20 11:25:24.912868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:19.621 [2024-11-20 11:25:25.044129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:19.621 [2024-11-20 11:25:25.044241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.621 [2024-11-20 11:25:25.044277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:20.187 11:25:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:20.187 11:25:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:11:20.187 11:25:25 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:11:20.445 I/O targets: 00:11:20.445 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:11:20.445 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:11:20.445 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:11:20.445 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:20.446 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:20.446 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:20.446 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:11:20.446 00:11:20.446 00:11:20.446 CUnit - A unit testing framework for C - Version 2.1-3 00:11:20.446 http://cunit.sourceforge.net/ 00:11:20.446 00:11:20.446 00:11:20.446 Suite: bdevio tests on: Nvme3n1 00:11:20.446 Test: blockdev write read block ...passed 00:11:20.446 Test: blockdev write zeroes read block ...passed 00:11:20.446 Test: blockdev write zeroes read no split ...passed 00:11:20.446 Test: blockdev write zeroes read split ...passed 00:11:20.446 Test: blockdev write zeroes read split partial ...passed 00:11:20.446 Test: blockdev reset ...[2024-11-20 11:25:26.107929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:11:20.446 [2024-11-20 11:25:26.112203] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:11:20.446 passed 00:11:20.446 Test: blockdev write read 8 blocks ...passed 00:11:20.446 Test: blockdev write read size > 128k ...passed 00:11:20.446 Test: blockdev write read invalid size ...passed 00:11:20.446 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:20.446 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:20.446 Test: blockdev write read max offset ...passed 00:11:20.446 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:20.446 Test: blockdev writev readv 8 blocks ...passed 00:11:20.446 Test: blockdev writev readv 30 x 1block ...passed 00:11:20.446 Test: blockdev writev readv block ...passed 00:11:20.446 Test: blockdev writev readv size > 128k ...passed 00:11:20.446 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:20.446 Test: blockdev comparev and writev ...[2024-11-20 11:25:26.120171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b5804000 len:0x1000 00:11:20.446 [2024-11-20 11:25:26.120233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:20.446 passed 00:11:20.446 Test: blockdev nvme passthru rw ...passed 00:11:20.446 Test: blockdev nvme passthru vendor specific ...[2024-11-20 11:25:26.120919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:20.446 [2024-11-20 11:25:26.120960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:20.446 passed 00:11:20.446 Test: blockdev nvme admin passthru ...passed 00:11:20.446 Test: blockdev copy ...passed 00:11:20.446 Suite: bdevio tests on: Nvme2n3 00:11:20.446 Test: blockdev write read block ...passed 00:11:20.446 Test: blockdev write zeroes read block ...passed 00:11:20.446 Test: blockdev write zeroes read no split ...passed 00:11:20.446 Test: blockdev write zeroes read split ...passed 00:11:20.704 Test: blockdev write zeroes read split partial ...passed 00:11:20.704 Test: blockdev reset ...[2024-11-20 11:25:26.210206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:11:20.704 [2024-11-20 11:25:26.215133] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:11:20.704 passed 00:11:20.704 Test: blockdev write read 8 blocks ...passed 00:11:20.704 Test: blockdev write read size > 128k ...passed 00:11:20.704 Test: blockdev write read invalid size ...passed 00:11:20.704 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:20.704 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:20.704 Test: blockdev write read max offset ...passed 00:11:20.704 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:20.704 Test: blockdev writev readv 8 blocks ...passed 00:11:20.704 Test: blockdev writev readv 30 x 1block ...passed 00:11:20.704 Test: blockdev writev readv block ...passed 00:11:20.704 Test: blockdev writev readv size > 128k ...passed 00:11:20.704 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:20.704 Test: blockdev comparev and writev ...[2024-11-20 11:25:26.222902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b5802000 len:0x1000 00:11:20.704 [2024-11-20 11:25:26.222968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:20.704 passed 00:11:20.704 Test: blockdev nvme passthru rw ...passed 00:11:20.704 Test: blockdev nvme passthru vendor specific ...[2024-11-20 11:25:26.223817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:20.704 passed 00:11:20.704 Test: blockdev nvme admin passthru ...[2024-11-20 11:25:26.223863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:20.704 passed 00:11:20.704 Test: blockdev copy ...passed 00:11:20.704 Suite: bdevio tests on: Nvme2n2 00:11:20.704 Test: blockdev write read block ...passed 00:11:20.704 Test: blockdev write zeroes read block ...passed 00:11:20.704 Test: blockdev write zeroes read no split ...passed 00:11:20.704 Test: blockdev write zeroes read split ...passed 00:11:20.704 Test: blockdev write zeroes read split partial ...passed 00:11:20.704 Test: blockdev reset ...[2024-11-20 11:25:26.310355] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:11:20.704 [2024-11-20 11:25:26.315182] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:11:20.704 passed 00:11:20.704 Test: blockdev write read 8 blocks ...passed 00:11:20.704 Test: blockdev write read size > 128k ...passed 00:11:20.704 Test: blockdev write read invalid size ...passed 00:11:20.704 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:20.704 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:20.704 Test: blockdev write read max offset ...passed 00:11:20.704 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:20.704 Test: blockdev writev readv 8 blocks ...passed 00:11:20.704 Test: blockdev writev readv 30 x 1block ...passed 00:11:20.704 Test: blockdev writev readv block ...passed 00:11:20.704 Test: blockdev writev readv size > 128k ...passed 00:11:20.704 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:20.704 Test: blockdev comparev and writev ...[2024-11-20 11:25:26.323090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c8638000 len:0x1000 00:11:20.704 [2024-11-20 11:25:26.323156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:20.704 passed 00:11:20.704 Test: blockdev nvme passthru rw ...passed 00:11:20.704 Test: blockdev nvme passthru vendor specific ...[2024-11-20 11:25:26.324004] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:20.705 passed 00:11:20.705 Test: blockdev nvme admin passthru ...[2024-11-20 11:25:26.324046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:20.705 passed 00:11:20.705 Test: blockdev copy ...passed 00:11:20.705 Suite: bdevio tests on: Nvme2n1 00:11:20.705 Test: blockdev write read block ...passed 00:11:20.705 Test: blockdev write zeroes read block ...passed 00:11:20.705 Test: blockdev write zeroes read no split ...passed 00:11:20.705 Test: blockdev write zeroes read split ...passed 00:11:20.705 Test: blockdev write zeroes read split partial ...passed 00:11:20.705 Test: blockdev reset ...[2024-11-20 11:25:26.438184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:11:20.705 [2024-11-20 11:25:26.442976] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:11:20.705 passed 00:11:20.705 Test: blockdev write read 8 blocks ...passed 00:11:20.705 Test: blockdev write read size > 128k ...passed 00:11:20.705 Test: blockdev write read invalid size ...passed 00:11:20.705 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:20.705 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:20.705 Test: blockdev write read max offset ...passed 00:11:20.705 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:20.705 Test: blockdev writev readv 8 blocks ...passed 00:11:20.705 Test: blockdev writev readv 30 x 1block ...passed 00:11:20.705 Test: blockdev writev readv block ...passed 00:11:20.705 Test: blockdev writev readv size > 128k ...passed 00:11:20.705 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:20.705 Test: blockdev comparev and writev ...[2024-11-20 11:25:26.452765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c8634000 len:0x1000 00:11:20.705 passed 00:11:20.705 Test: blockdev nvme passthru rw ...[2024-11-20 11:25:26.452836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:20.705 passed 00:11:20.705 Test: blockdev nvme passthru vendor specific ...[2024-11-20 11:25:26.453564] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:20.705 [2024-11-20 11:25:26.453604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:20.705 passed 00:11:20.705 Test: blockdev nvme admin passthru ...passed 00:11:20.705 Test: blockdev copy ...passed 00:11:20.705 Suite: bdevio tests on: Nvme1n1p2 00:11:20.705 Test: blockdev write read block ...passed 00:11:20.705 Test: blockdev write zeroes read block ...passed 00:11:20.963 Test: blockdev write zeroes read no split ...passed 00:11:20.963 Test: blockdev write zeroes read split ...passed 00:11:20.964 Test: blockdev write zeroes read split partial ...passed 00:11:20.964 Test: blockdev reset ...[2024-11-20 11:25:26.560466] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:11:20.964 [2024-11-20 11:25:26.565076] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:11:20.964 passed 00:11:20.964 Test: blockdev write read 8 blocks ...passed 00:11:20.964 Test: blockdev write read size > 128k ...passed 00:11:20.964 Test: blockdev write read invalid size ...passed 00:11:20.964 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:20.964 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:20.964 Test: blockdev write read max offset ...passed 00:11:20.964 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:20.964 Test: blockdev writev readv 8 blocks ...passed 00:11:20.964 Test: blockdev writev readv 30 x 1block ...passed 00:11:20.964 Test: blockdev writev readv block ...passed 00:11:20.964 Test: blockdev writev readv size > 128k ...passed 00:11:20.964 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:20.964 Test: blockdev comparev and writev ...[2024-11-20 11:25:26.574341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2c8630000 len:0x1000 00:11:20.964 [2024-11-20 11:25:26.574400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:20.964 passed 00:11:20.964 Test: blockdev nvme passthru rw ...passed 00:11:20.964 Test: blockdev nvme passthru vendor specific ...passed 00:11:20.964 Test: blockdev nvme admin passthru ...passed 00:11:20.964 Test: blockdev copy ...passed 00:11:20.964 Suite: bdevio tests on: Nvme1n1p1 00:11:20.964 Test: blockdev write read block ...passed 00:11:20.964 Test: blockdev write zeroes read block ...passed 00:11:20.964 Test: blockdev write zeroes read no split ...passed 00:11:20.964 Test: blockdev write zeroes read split ...passed 00:11:20.964 Test: blockdev write zeroes read split partial ...passed 00:11:20.964 Test: blockdev reset ...[2024-11-20 11:25:26.673079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:11:20.964 [2024-11-20 11:25:26.677899] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:11:20.964 passed 00:11:20.964 Test: blockdev write read 8 blocks ...passed 00:11:20.964 Test: blockdev write read size > 128k ...passed 00:11:20.964 Test: blockdev write read invalid size ...passed 00:11:20.964 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:20.964 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:20.964 Test: blockdev write read max offset ...passed 00:11:20.964 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:20.964 Test: blockdev writev readv 8 blocks ...passed 00:11:20.964 Test: blockdev writev readv 30 x 1block ...passed 00:11:20.964 Test: blockdev writev readv block ...passed 00:11:20.964 Test: blockdev writev readv size > 128k ...passed 00:11:20.964 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:20.964 Test: blockdev comparev and writev ...[2024-11-20 11:25:26.687236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b620e000 len:0x1000 00:11:20.964 [2024-11-20 11:25:26.687295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:20.964 passed 00:11:20.964 Test: blockdev nvme passthru rw ...passed 00:11:20.964 Test: blockdev nvme passthru vendor specific ...passed 00:11:20.964 Test: blockdev nvme admin passthru ...passed 00:11:20.964 Test: blockdev copy ...passed 00:11:20.964 Suite: bdevio tests on: Nvme0n1 00:11:20.964 Test: blockdev write read block ...passed 00:11:20.964 Test: blockdev write zeroes read block ...passed 00:11:20.964 Test: blockdev write zeroes read no split ...passed 00:11:21.223 Test: blockdev write zeroes read split ...passed 00:11:21.223 Test: blockdev write zeroes read split partial ...passed 00:11:21.223 Test: blockdev reset ...[2024-11-20 11:25:26.775973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:11:21.223 [2024-11-20 11:25:26.780154] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:11:21.223 passed 00:11:21.223 Test: blockdev write read 8 blocks ...passed 00:11:21.223 Test: blockdev write read size > 128k ...passed 00:11:21.223 Test: blockdev write read invalid size ...passed 00:11:21.223 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:21.223 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:21.223 Test: blockdev write read max offset ...passed 00:11:21.223 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:21.223 Test: blockdev writev readv 8 blocks ...passed 00:11:21.223 Test: blockdev writev readv 30 x 1block ...passed 00:11:21.223 Test: blockdev writev readv block ...passed 00:11:21.223 Test: blockdev writev readv size > 128k ...passed 00:11:21.223 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:21.223 Test: blockdev comparev and writev ...passed 00:11:21.223 Test: blockdev nvme passthru rw ...[2024-11-20 11:25:26.787124] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:11:21.223 separate metadata which is not supported yet. 
00:11:21.223 passed 00:11:21.223 Test: blockdev nvme passthru vendor specific ...[2024-11-20 11:25:26.787817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:11:21.223 [2024-11-20 11:25:26.787876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:11:21.223 passed 00:11:21.223 Test: blockdev nvme admin passthru ...passed 00:11:21.223 Test: blockdev copy ...passed 00:11:21.223 00:11:21.223 Run Summary: Type Total Ran Passed Failed Inactive 00:11:21.223 suites 7 7 n/a 0 0 00:11:21.223 tests 161 161 161 0 0 00:11:21.223 asserts 1025 1025 1025 0 n/a 00:11:21.223 00:11:21.223 Elapsed time = 2.154 seconds 00:11:21.223 0 00:11:21.223 11:25:26 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 63139 00:11:21.223 11:25:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 63139 ']' 00:11:21.223 11:25:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 63139 00:11:21.223 11:25:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:11:21.223 11:25:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:21.223 11:25:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63139 00:11:21.223 11:25:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:21.223 11:25:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:21.223 killing process with pid 63139 00:11:21.223 11:25:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63139' 00:11:21.223 11:25:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 63139 00:11:21.223 11:25:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 63139 00:11:22.597 11:25:28 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:11:22.597 00:11:22.597 real 0m3.445s 00:11:22.597 user 0m9.119s 00:11:22.597 sys 0m0.486s 00:11:22.597 11:25:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:22.597 11:25:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:11:22.597 ************************************ 00:11:22.597 END TEST bdev_bounds 00:11:22.597 ************************************ 00:11:22.597 11:25:28 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:11:22.597 11:25:28 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:22.597 11:25:28 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:22.597 11:25:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:22.597 ************************************ 00:11:22.597 START TEST bdev_nbd 00:11:22.597 ************************************ 00:11:22.597 11:25:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:11:22.597 11:25:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:11:22.597 11:25:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:11:22.597 11:25:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:22.597 11:25:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:22.597 11:25:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:22.597 11:25:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:11:22.597 11:25:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:11:22.597 11:25:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:11:22.597 11:25:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:22.597 11:25:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:11:22.597 11:25:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:11:22.597 11:25:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:22.597 11:25:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:11:22.597 11:25:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:22.597 11:25:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:11:22.597 11:25:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=63204 00:11:22.597 11:25:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:22.598 11:25:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:11:22.598 11:25:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 63204 /var/tmp/spdk-nbd.sock 00:11:22.598 11:25:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 63204 ']' 00:11:22.598 11:25:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:22.598 11:25:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:22.598 11:25:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:22.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:22.598 11:25:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:22.598 11:25:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:11:22.598 [2024-11-20 11:25:28.229310] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization...
00:11:22.598 [2024-11-20 11:25:28.229518] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:22.856 [2024-11-20 11:25:28.432333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.114 [2024-11-20 11:25:28.655123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.681 11:25:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:23.681 11:25:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:11:23.681 11:25:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:11:23.681 11:25:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:23.681 11:25:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:23.681 11:25:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:11:23.681 11:25:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:11:23.681 11:25:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:23.681 11:25:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:23.682 11:25:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:11:23.682 11:25:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:11:23.682 11:25:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:11:23.940 11:25:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:11:23.940 11:25:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:23.940 11:25:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:11:24.198 11:25:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:11:24.198 11:25:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:11:24.198 11:25:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:11:24.198 11:25:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:24.198 11:25:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:24.198 11:25:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:24.198 11:25:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:24.198 11:25:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:24.198 11:25:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:24.198 11:25:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:24.198 11:25:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:24.198 11:25:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:24.198 1+0 records in 00:11:24.198 1+0 records out 00:11:24.198 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000484343 s, 8.5 MB/s 00:11:24.198 11:25:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:24.198 11:25:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:24.198 11:25:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:24.198 11:25:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:24.198 11:25:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:24.198 11:25:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:24.198 11:25:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:24.198 11:25:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:11:24.457 11:25:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:11:24.457 11:25:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:11:24.457 11:25:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:11:24.457 11:25:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:24.457 11:25:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:24.457 11:25:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:24.457 11:25:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:24.457 11:25:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:24.458 11:25:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:24.458 11:25:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:24.458 11:25:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:24.458 11:25:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:24.458 1+0 records in 00:11:24.458 1+0 records out 00:11:24.458 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000707281 s, 5.8 MB/s 00:11:24.458 11:25:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:24.458 11:25:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:24.458 11:25:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:24.458 11:25:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:24.458 11:25:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:24.458 11:25:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:24.458 11:25:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:24.458 11:25:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 00:11:25.025 11:25:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:11:25.025 11:25:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:11:25.025 11:25:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:11:25.025 11:25:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:11:25.025 11:25:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:25.025 11:25:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:25.025 11:25:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:25.025 11:25:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:11:25.025 11:25:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:25.025 11:25:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:25.025 11:25:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:25.025 11:25:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:25.025 1+0 records in 00:11:25.025 1+0 records out 00:11:25.025 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000631583 s, 6.5 MB/s 00:11:25.025 11:25:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:25.025 11:25:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:25.025 11:25:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:25.025 11:25:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:25.025 11:25:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:25.025 11:25:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:25.025 11:25:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:25.025 11:25:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:11:25.284 11:25:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:11:25.284 11:25:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:11:25.284 11:25:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:11:25.284 11:25:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:11:25.284 11:25:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:25.284 11:25:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:25.284 11:25:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:25.284 11:25:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:11:25.284 11:25:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:25.284 11:25:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:25.284 11:25:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:25.284 11:25:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:25.284 1+0 records in 00:11:25.284 1+0 records out 00:11:25.284 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000623662 s, 6.6 MB/s 00:11:25.284 11:25:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:25.284 11:25:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:25.284 11:25:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:25.284 11:25:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:25.284 11:25:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:25.284 11:25:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:25.284 11:25:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:25.284 11:25:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:11:25.543 11:25:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:11:25.543 11:25:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:11:25.543 11:25:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:11:25.543 11:25:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:11:25.543 11:25:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:25.543 11:25:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:25.543 11:25:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:25.543 11:25:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:11:25.543 11:25:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:25.543 11:25:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:25.543 11:25:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:25.543 11:25:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:25.543 1+0 records in 00:11:25.543 1+0 records out 00:11:25.543 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000726658 s, 5.6 MB/s 00:11:25.543 11:25:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:25.543 11:25:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:25.543 11:25:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:25.802 11:25:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:25.802 11:25:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:25.802 11:25:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:25.802 11:25:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:25.802 11:25:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:11:26.060 11:25:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:11:26.060 11:25:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:11:26.060 11:25:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:11:26.060 11:25:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:11:26.060 11:25:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:26.060 11:25:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:26.060 11:25:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:26.060 11:25:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:11:26.060 11:25:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:26.060 11:25:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:26.060 11:25:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:26.060 11:25:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:26.060 1+0 records in 00:11:26.060 1+0 records out 00:11:26.060 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00102664 s, 4.0 MB/s 00:11:26.060 11:25:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.060 11:25:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:26.060 11:25:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.060 11:25:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:26.060 11:25:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:26.061 11:25:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:26.061 11:25:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:26.061 11:25:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:11:26.319 11:25:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:11:26.319 11:25:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:11:26.319 11:25:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:11:26.319 11:25:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:11:26.319 11:25:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:26.319 11:25:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:26.319 11:25:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:26.319 11:25:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:11:26.319 11:25:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:26.319 11:25:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:26.319 11:25:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:26.319 11:25:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:26.319 1+0 records in 00:11:26.319 1+0 records out 00:11:26.319 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000787267 s, 5.2 MB/s 00:11:26.319 11:25:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.319 11:25:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:26.319 11:25:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.319 11:25:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:26.319 11:25:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:26.319 11:25:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:26.319 11:25:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:26.319 11:25:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:26.886 11:25:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:11:26.886 { 00:11:26.886 "nbd_device": "/dev/nbd0", 00:11:26.886 "bdev_name": "Nvme0n1" 00:11:26.886 }, 00:11:26.886 { 00:11:26.886 "nbd_device": "/dev/nbd1", 00:11:26.886 "bdev_name": "Nvme1n1p1" 00:11:26.886 }, 00:11:26.886 { 00:11:26.886 "nbd_device": "/dev/nbd2", 00:11:26.886 "bdev_name": "Nvme1n1p2" 00:11:26.886 }, 00:11:26.886 { 00:11:26.886 "nbd_device": "/dev/nbd3", 00:11:26.886 "bdev_name": "Nvme2n1" 00:11:26.886 }, 00:11:26.886 { 00:11:26.886 "nbd_device": "/dev/nbd4", 00:11:26.886 "bdev_name": "Nvme2n2" 00:11:26.886 }, 00:11:26.886 { 00:11:26.886 "nbd_device": "/dev/nbd5", 00:11:26.886 "bdev_name": "Nvme2n3" 00:11:26.886 }, 00:11:26.886 { 00:11:26.886 "nbd_device": "/dev/nbd6", 00:11:26.886 "bdev_name": "Nvme3n1" 00:11:26.886 } 00:11:26.886 ]' 00:11:26.886 11:25:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:11:26.886 11:25:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:11:26.886 { 00:11:26.886 "nbd_device": "/dev/nbd0", 00:11:26.886 "bdev_name": "Nvme0n1" 00:11:26.886 }, 00:11:26.886 { 00:11:26.886 "nbd_device": "/dev/nbd1", 00:11:26.886 "bdev_name": "Nvme1n1p1" 00:11:26.886 }, 00:11:26.886 { 00:11:26.886 "nbd_device": "/dev/nbd2", 00:11:26.886 "bdev_name": "Nvme1n1p2" 00:11:26.886 }, 00:11:26.886 { 00:11:26.886 "nbd_device": "/dev/nbd3", 00:11:26.886 "bdev_name": "Nvme2n1" 00:11:26.886 }, 00:11:26.886 { 00:11:26.886 "nbd_device": "/dev/nbd4", 00:11:26.886 "bdev_name": "Nvme2n2" 00:11:26.886 }, 00:11:26.886 { 00:11:26.886 "nbd_device": "/dev/nbd5", 00:11:26.886 "bdev_name": "Nvme2n3" 00:11:26.886 }, 00:11:26.886 { 00:11:26.886 "nbd_device": "/dev/nbd6", 00:11:26.886 "bdev_name": "Nvme3n1" 00:11:26.886 } 00:11:26.886 ]' 00:11:26.886 11:25:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:11:26.886 11:25:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:11:26.886 11:25:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:26.886 11:25:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:11:26.886 11:25:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:26.886 11:25:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:26.886 11:25:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:26.886 11:25:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:27.144 11:25:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:27.144 11:25:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:27.144 11:25:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:27.144 11:25:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:27.144 11:25:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:27.144 11:25:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:27.144 11:25:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:27.144 11:25:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:27.144 11:25:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:27.144 11:25:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:27.716 11:25:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:27.716 11:25:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:27.716 11:25:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:27.716 11:25:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:27.716 11:25:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:27.716 11:25:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:27.716 11:25:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:27.716 11:25:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:27.716 11:25:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:27.716 11:25:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:11:27.977 11:25:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:11:27.977 11:25:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:11:27.977 11:25:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:11:27.977 11:25:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:27.977 11:25:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:27.977 11:25:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:11:27.977 11:25:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:27.977 11:25:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:27.977 11:25:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:27.977 11:25:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:11:28.273 11:25:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:11:28.273 11:25:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:11:28.273 11:25:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:11:28.273 11:25:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:28.273 11:25:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:28.273 11:25:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:11:28.273 11:25:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:28.273 11:25:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:28.273 11:25:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:28.273 11:25:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:11:28.532 11:25:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:11:28.532 11:25:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:11:28.532 11:25:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:11:28.532 11:25:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:28.532 11:25:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:28.532 11:25:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:11:28.532 11:25:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:28.532 11:25:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:28.532 11:25:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:28.532 11:25:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:11:28.791 11:25:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:11:28.791 11:25:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:11:29.050 11:25:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:11:29.050 11:25:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:29.050 11:25:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:29.050 11:25:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:11:29.050 11:25:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:29.050 11:25:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:29.050 11:25:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:29.050 11:25:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:11:29.309 11:25:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:11:29.309 11:25:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:11:29.309 11:25:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6
00:11:29.309 11:25:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:29.309 11:25:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:29.309 11:25:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:11:29.309 11:25:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:29.309 11:25:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:29.309 11:25:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:29.309 11:25:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:29.309 11:25:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:29.568 11:25:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:29.568 11:25:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:29.568 11:25:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:29.568 11:25:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:29.568 11:25:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:11:29.568 11:25:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:29.568 11:25:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:11:29.568 11:25:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:11:29.568 11:25:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:11:29.568 11:25:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:11:29.568 11:25:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:11:29.568 11:25:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:11:29.568 11:25:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:11:29.568 11:25:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:29.568 11:25:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:29.568 11:25:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:29.568 11:25:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:29.568 11:25:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:29.568 11:25:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:11:29.568 11:25:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:29.568 11:25:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:29.568 11:25:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:29.568 11:25:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:29.568 11:25:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:29.568 11:25:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:11:29.568 11:25:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:29.568 11:25:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:29.568 11:25:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:11:29.826 /dev/nbd0 00:11:30.085 11:25:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:30.085 11:25:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:30.085 11:25:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:30.085 11:25:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:30.085 11:25:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:30.085 11:25:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:30.085 11:25:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:30.085 11:25:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:30.085 11:25:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:30.085 11:25:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:30.085 11:25:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:30.085 1+0 records in 00:11:30.085 1+0 records out 00:11:30.085 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000629834 s, 6.5 MB/s 00:11:30.085 11:25:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:30.085 11:25:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:30.085 11:25:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:30.085 11:25:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:30.085 11:25:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:30.085 11:25:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:30.085 11:25:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:30.085 11:25:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:11:30.343 /dev/nbd1 00:11:30.343 11:25:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:30.343 11:25:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:30.343 11:25:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:30.343 11:25:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:30.343 11:25:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:30.343 11:25:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:30.343 11:25:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:30.343 11:25:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:30.343 11:25:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:30.343 11:25:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:30.343 11:25:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:30.343 1+0 records in 00:11:30.343 1+0 records out 00:11:30.343 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000497186 s, 8.2 MB/s 00:11:30.343 11:25:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:30.343 11:25:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:30.343 11:25:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:30.343 11:25:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:30.343 11:25:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:30.343 11:25:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:30.343 11:25:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:30.343 11:25:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:11:30.602 /dev/nbd10 00:11:30.602 11:25:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:11:30.602 11:25:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:11:30.602 11:25:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:11:30.602 11:25:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:30.602 11:25:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:30.602 11:25:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:30.602 11:25:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:11:30.602 11:25:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:30.602 11:25:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:30.602 11:25:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:30.602 11:25:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:30.602 1+0 records in 00:11:30.602 1+0 records out 00:11:30.602 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000698688 s, 5.9 MB/s 00:11:30.602 11:25:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:30.602 11:25:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:30.602 11:25:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:30.602 11:25:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:30.602 11:25:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:30.602 11:25:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:30.602 11:25:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:30.602 11:25:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:11:30.860 /dev/nbd11 00:11:30.860 11:25:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:11:30.860 11:25:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:11:30.860 11:25:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:11:30.860 11:25:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:30.860 11:25:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:30.860 11:25:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:30.860 11:25:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:11:30.860 11:25:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:30.860 11:25:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:30.860 11:25:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:30.860 11:25:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:30.860 1+0 records in 00:11:30.860 1+0 records out 00:11:30.860 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000695964 s, 5.9 MB/s 00:11:30.860 11:25:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:30.860 11:25:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:30.860 11:25:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:30.860 11:25:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:30.860 11:25:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:30.860 11:25:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:30.860 11:25:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:30.860 11:25:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:11:31.119 /dev/nbd12 00:11:31.377 11:25:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:11:31.377 11:25:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:11:31.377 11:25:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:11:31.377 11:25:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:31.377 11:25:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:31.377 11:25:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:31.377 11:25:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions
00:11:31.377 11:25:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:31.377 11:25:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:31.377 11:25:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:31.377 11:25:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:31.377 1+0 records in 00:11:31.377 1+0 records out 00:11:31.377 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000722425 s, 5.7 MB/s 00:11:31.377 11:25:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:31.377 11:25:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:31.377 11:25:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:31.377 11:25:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:31.377 11:25:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:31.377 11:25:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:31.377 11:25:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:31.377 11:25:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:11:31.635 /dev/nbd13 00:11:31.635 11:25:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:11:31.635 11:25:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:11:31.635 11:25:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:11:31.635 11:25:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:31.636 11:25:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:31.636 11:25:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:31.636 11:25:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:11:31.636 11:25:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:31.636 11:25:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:31.636 11:25:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:31.636 11:25:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:31.636 1+0 records in 00:11:31.636 1+0 records out 00:11:31.636 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000775722 s, 5.3 MB/s 00:11:31.636 11:25:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:31.636 11:25:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:31.636 11:25:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:31.636 11:25:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:31.636 11:25:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:31.636 11:25:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:31.636 11:25:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:31.636 11:25:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:11:31.894 /dev/nbd14 00:11:31.894 11:25:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:11:31.894 11:25:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:11:31.894 11:25:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:11:31.894 11:25:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:31.894 11:25:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:31.894 11:25:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:31.894 11:25:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:11:31.894 11:25:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:31.894 11:25:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:31.894 11:25:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:31.894 11:25:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:31.894 1+0 records in 00:11:31.894 1+0 records out 00:11:31.894 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00146855 s, 2.8 MB/s 00:11:31.894 11:25:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:31.894 11:25:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:31.894 11:25:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:31.894 11:25:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:31.894 11:25:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:31.894 11:25:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:31.895 11:25:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:31.895 11:25:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:31.895 11:25:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:31.895 11:25:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:32.462 11:25:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:32.462 { 00:11:32.462 "nbd_device": "/dev/nbd0", 00:11:32.462 "bdev_name": "Nvme0n1" 00:11:32.462 }, 00:11:32.462 { 00:11:32.462 "nbd_device": "/dev/nbd1", 00:11:32.462 "bdev_name": "Nvme1n1p1" 00:11:32.462 }, 00:11:32.462 { 00:11:32.462 "nbd_device": "/dev/nbd10", 00:11:32.462 "bdev_name": "Nvme1n1p2" 00:11:32.462 }, 00:11:32.462 { 00:11:32.462 "nbd_device": "/dev/nbd11", 00:11:32.462 "bdev_name": "Nvme2n1" 00:11:32.462 }, 00:11:32.462 { 00:11:32.462 "nbd_device": "/dev/nbd12", 00:11:32.462 "bdev_name": "Nvme2n2" 00:11:32.462 }, 00:11:32.462 { 00:11:32.462 "nbd_device": "/dev/nbd13", 00:11:32.462 "bdev_name": "Nvme2n3"
00:11:32.462 }, 00:11:32.462 { 00:11:32.462 "nbd_device": "/dev/nbd14", 00:11:32.462 "bdev_name": "Nvme3n1" 00:11:32.462 } 00:11:32.462 ]' 00:11:32.462 11:25:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:32.462 11:25:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:32.462 { 00:11:32.462 "nbd_device": "/dev/nbd0", 00:11:32.462 "bdev_name": "Nvme0n1" 00:11:32.462 }, 00:11:32.462 { 00:11:32.462 "nbd_device": "/dev/nbd1", 00:11:32.462 "bdev_name": "Nvme1n1p1" 00:11:32.462 }, 00:11:32.462 { 00:11:32.462 "nbd_device": "/dev/nbd10", 00:11:32.462 "bdev_name": "Nvme1n1p2" 00:11:32.462 }, 00:11:32.462 { 00:11:32.462 "nbd_device": "/dev/nbd11", 00:11:32.462 "bdev_name": "Nvme2n1" 00:11:32.462 }, 00:11:32.462 { 00:11:32.462 "nbd_device": "/dev/nbd12", 00:11:32.462 "bdev_name": "Nvme2n2" 00:11:32.462 }, 00:11:32.462 { 00:11:32.462 "nbd_device": "/dev/nbd13", 00:11:32.462 "bdev_name": "Nvme2n3" 00:11:32.462 }, 00:11:32.462 { 00:11:32.462 "nbd_device": "/dev/nbd14", 00:11:32.462 "bdev_name": "Nvme3n1" 00:11:32.462 } 00:11:32.462 ]' 00:11:32.462 11:25:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:32.462 /dev/nbd1 00:11:32.462 /dev/nbd10 00:11:32.462 /dev/nbd11 00:11:32.462 /dev/nbd12 00:11:32.462 /dev/nbd13 00:11:32.462 /dev/nbd14' 00:11:32.462 11:25:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:32.462 /dev/nbd1 00:11:32.462 /dev/nbd10 00:11:32.462 /dev/nbd11 00:11:32.462 /dev/nbd12 00:11:32.462 /dev/nbd13 00:11:32.462 /dev/nbd14' 00:11:32.462 11:25:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:32.462 11:25:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:11:32.462 11:25:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:11:32.462 11:25:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:11:32.462 11:25:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:11:32.462 11:25:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:11:32.462 11:25:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:32.462 11:25:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:32.462 11:25:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:32.462 11:25:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:32.462 11:25:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:32.462 11:25:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:11:32.462 256+0 records in 00:11:32.462 256+0 records out 00:11:32.462 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00939436 s, 112 MB/s 00:11:32.462 11:25:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:32.462 11:25:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:32.462 256+0 records in 00:11:32.462 256+0 records out 00:11:32.462 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.150423 s, 7.0 MB/s 00:11:32.462 11:25:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:32.462 11:25:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:32.720 256+0 records in 00:11:32.720 256+0 records out 00:11:32.720 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.154798 s, 6.8 MB/s 00:11:32.720 11:25:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:32.720 11:25:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:11:32.979 256+0 records in 00:11:32.979 256+0 records out 00:11:32.979 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.176396 s, 5.9 MB/s 00:11:32.979 11:25:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:32.979 11:25:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:11:32.979 256+0 records in 00:11:32.979 256+0 records out 00:11:32.979 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.158659 s, 6.6 MB/s 00:11:32.979 11:25:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:32.979 11:25:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:11:33.238 256+0 records in 00:11:33.238 256+0 records out 00:11:33.238 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.150668 s, 7.0 MB/s 00:11:33.238 11:25:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:33.239 11:25:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:11:33.497 256+0 records in 00:11:33.497 256+0 records out 00:11:33.497 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.149766 s, 7.0 MB/s 00:11:33.497 11:25:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:33.497 11:25:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:11:33.497 256+0 records in 00:11:33.497 256+0 records out 00:11:33.497 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.151732 s, 6.9 MB/s 00:11:33.497 11:25:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:11:33.497 11:25:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:33.497 11:25:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:33.497 11:25:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:33.497 11:25:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:33.497 11:25:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:33.497 11:25:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:33.497 11:25:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:11:33.497 11:25:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:11:33.497 11:25:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:33.497 11:25:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:11:33.497 11:25:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:33.497 11:25:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:11:33.497 11:25:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:33.497 11:25:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:11:33.497 11:25:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:33.497 11:25:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:11:33.497 11:25:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:33.497 11:25:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:11:33.497 11:25:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:33.497 11:25:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:11:33.755 11:25:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:33.755 11:25:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:11:33.755 11:25:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:33.755 11:25:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:33.755 11:25:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:33.756 11:25:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:33.756 11:25:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:33.756 11:25:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:34.014 11:25:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:34.014 11:25:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:34.014 11:25:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:34.014 11:25:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:34.014 11:25:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:34.014 11:25:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:34.014 11:25:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:34.014 11:25:39 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:11:34.014 11:25:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:34.014 11:25:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:34.273 11:25:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:34.273 11:25:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:34.273 11:25:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:34.273 11:25:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:34.273 11:25:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:34.273 11:25:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:34.273 11:25:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:34.273 11:25:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:34.274 11:25:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:34.274 11:25:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:11:34.841 11:25:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:11:34.841 11:25:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:11:34.841 11:25:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:11:34.841 11:25:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:34.841 11:25:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:34.841 11:25:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:11:34.841 11:25:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:34.841 11:25:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:34.841 11:25:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:34.841 11:25:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:11:35.100 11:25:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:11:35.100 11:25:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:11:35.100 11:25:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:11:35.100 11:25:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:35.100 11:25:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:35.100 11:25:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:11:35.100 11:25:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:35.100 11:25:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:35.100 11:25:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:35.100 11:25:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:11:35.359 11:25:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:11:35.359 11:25:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:11:35.359 11:25:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:11:35.359 11:25:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:35.359 11:25:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:35.359 11:25:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:11:35.359 11:25:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:35.359 11:25:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:35.359 11:25:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:35.359 11:25:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:11:35.617 11:25:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:11:35.875 11:25:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:11:35.875 11:25:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:11:35.875 11:25:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:35.875 11:25:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:35.875 11:25:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:11:35.875 11:25:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:35.875 11:25:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:35.875 11:25:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:35.875 11:25:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:11:36.136 11:25:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:11:36.136 11:25:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:11:36.136 11:25:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:11:36.136 11:25:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:36.136 11:25:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:36.136 11:25:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:11:36.136 11:25:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:36.136 11:25:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:36.136 11:25:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:36.137 11:25:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:36.137 11:25:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:36.396 11:25:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:36.396 11:25:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:36.396 11:25:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:36.396 11:25:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:11:36.396 11:25:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:11:36.396 11:25:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:36.396 11:25:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:11:36.396 11:25:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:11:36.396 11:25:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:11:36.396 11:25:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:11:36.396 11:25:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:36.396 11:25:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:11:36.396 11:25:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:11:36.396 11:25:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:36.396 11:25:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:11:36.396 11:25:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:11:36.963 malloc_lvol_verify 00:11:36.963 11:25:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:11:37.223 24b1204a-9951-4811-a094-21ef5067afa8 00:11:37.223 11:25:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:11:37.484 48ba151a-76c1-4e9e-8298-a22793bf0e49 00:11:37.747 11:25:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:11:38.013 /dev/nbd0 00:11:38.013 11:25:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:11:38.013 11:25:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:11:38.013 11:25:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:11:38.013 11:25:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:11:38.013 11:25:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:11:38.013 mke2fs 1.47.0 (5-Feb-2023) 00:11:38.013 Discarding device blocks: 0/4096 done 00:11:38.013 Creating filesystem with 4096 1k blocks and 1024 inodes 00:11:38.013 00:11:38.013 Allocating group tables: 0/1 done 00:11:38.013 Writing inode tables: 0/1 done 00:11:38.013 Creating journal (1024 blocks): done 00:11:38.013 Writing superblocks and filesystem accounting information: 0/1 done 00:11:38.013 00:11:38.013 11:25:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:11:38.013 11:25:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:38.013 11:25:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:38.013 11:25:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:38.013 11:25:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:38.013 11:25:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:11:38.013 11:25:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:38.281 11:25:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:38.281 11:25:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:38.281 11:25:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:38.281 11:25:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:38.281 11:25:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:38.281 11:25:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:38.281 11:25:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:38.281 11:25:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:38.281 11:25:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 63204 00:11:38.281 11:25:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 63204 ']' 00:11:38.281 11:25:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 63204 00:11:38.281 11:25:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:11:38.281 11:25:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:38.281 11:25:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63204 00:11:38.281 killing process with pid 63204 00:11:38.281 11:25:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:38.281 11:25:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:38.281 11:25:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63204' 00:11:38.281 11:25:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 63204 00:11:38.282 11:25:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 63204 00:11:40.856 ************************************ 00:11:40.856 END TEST bdev_nbd 00:11:40.856 ************************************ 00:11:40.856 11:25:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:11:40.856 00:11:40.856 real 0m17.938s 00:11:40.856 user 0m24.353s 00:11:40.856 sys 0m6.933s 00:11:40.856 11:25:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:40.856 11:25:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:11:40.856 11:25:46 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:11:40.856 11:25:46 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:11:40.856 11:25:46 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:11:40.856 skipping fio tests on NVMe due to multi-ns failures. 00:11:40.856 11:25:46 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:11:40.856 11:25:46 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:40.856 11:25:46 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:11:40.856 11:25:46 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:11:40.856 11:25:46 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:40.856 11:25:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:40.856 ************************************ 00:11:40.856 START TEST bdev_verify 00:11:40.856 ************************************ 00:11:40.856 11:25:46 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:11:40.856 [2024-11-20 11:25:46.213810] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:11:40.856 [2024-11-20 11:25:46.213990] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63696 ] 00:11:40.856 [2024-11-20 11:25:46.409901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:40.856 [2024-11-20 11:25:46.546175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.856 [2024-11-20 11:25:46.546206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:41.791 Running I/O for 5 seconds... 
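To decode the bdevperf invocation for the run now in progress: -q 128 is the queue depth per job, -o 4096 the I/O size in bytes, -w verify a read-back-and-check workload, -t 5 the duration in seconds, and -m 0x3 runs reactors on cores 0 and 1; combined with -C, each core drives every bdev, which is why the result table below shows two jobs (Core Mask 0x1 and 0x2) per device. An equivalent standalone invocation, assuming the same repo layout as this run:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3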
00:11:44.104 20928.00 IOPS, 81.75 MiB/s [2024-11-20T11:25:50.801Z] 19808.00 IOPS, 77.38 MiB/s [2024-11-20T11:25:51.753Z] 19370.67 IOPS, 75.67 MiB/s [2024-11-20T11:25:52.689Z] 19072.00 IOPS, 74.50 MiB/s [2024-11-20T11:25:52.689Z] 18675.20 IOPS, 72.95 MiB/s 00:11:46.927 Latency(us) 00:11:46.927 [2024-11-20T11:25:52.689Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:46.927 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:46.927 Verification LBA range: start 0x0 length 0xbd0bd 00:11:46.927 Nvme0n1 : 5.12 1324.00 5.17 0.00 0.00 96463.07 22219.82 87381.33 00:11:46.927 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:46.927 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:11:46.927 Nvme0n1 : 5.11 1301.68 5.08 0.00 0.00 98115.02 19848.05 104857.60 00:11:46.927 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:46.927 Verification LBA range: start 0x0 length 0x4ff80 00:11:46.927 Nvme1n1p1 : 5.13 1323.31 5.17 0.00 0.00 96393.21 20846.69 88379.98 00:11:46.927 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:46.927 Verification LBA range: start 0x4ff80 length 0x4ff80 00:11:46.927 Nvme1n1p1 : 5.12 1301.18 5.08 0.00 0.00 97976.08 21720.50 102860.31 00:11:46.927 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:46.927 Verification LBA range: start 0x0 length 0x4ff7f 00:11:46.927 Nvme1n1p2 : 5.13 1322.74 5.17 0.00 0.00 96134.04 21221.18 88879.30 00:11:46.927 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:46.927 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:11:46.927 Nvme1n1p2 : 5.12 1300.77 5.08 0.00 0.00 97778.54 21720.50 99365.06 00:11:46.927 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:46.927 Verification LBA range: start 0x0 length 0x80000 00:11:46.927 Nvme2n1 : 5.13 1321.69 5.16 0.00 0.00 95993.36 23842.62 86382.69 00:11:46.927 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:46.928 Verification LBA range: start 0x80000 length 0x80000 00:11:46.928 Nvme2n1 : 5.12 1300.42 5.08 0.00 0.00 97622.96 21346.01 95869.81 00:11:46.928 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:46.928 Verification LBA range: start 0x0 length 0x80000 00:11:46.928 Nvme2n2 : 5.13 1321.23 5.16 0.00 0.00 95875.78 23218.47 85384.05 00:11:46.928 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:46.928 Verification LBA range: start 0x80000 length 0x80000 00:11:46.928 Nvme2n2 : 5.12 1299.98 5.08 0.00 0.00 97503.32 20472.20 96868.45 00:11:46.928 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:46.928 Verification LBA range: start 0x0 length 0x80000 00:11:46.928 Nvme2n3 : 5.14 1320.76 5.16 0.00 0.00 95759.97 17351.44 87880.66 00:11:46.928 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:46.928 Verification LBA range: start 0x80000 length 0x80000 00:11:46.928 Nvme2n3 : 5.12 1299.56 5.08 0.00 0.00 97390.81 19848.05 98366.42 00:11:46.928 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:46.928 Verification LBA range: start 0x0 length 0x20000 00:11:46.928 Nvme3n1 : 5.14 1320.26 5.16 0.00 0.00 95647.02 12358.22 89378.62 00:11:46.928 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:46.928 Verification LBA range: start 0x20000 length 0x20000 00:11:46.928 
Nvme3n1 : 5.12 1299.15 5.07 0.00 0.00 97272.08 14293.09 102860.31 00:11:46.928 [2024-11-20T11:25:52.690Z] =================================================================================================================== 00:11:46.928 [2024-11-20T11:25:52.690Z] Total : 18356.71 71.71 0.00 0.00 96844.05 12358.22 104857.60 00:11:48.833 00:11:48.833 real 0m8.038s 00:11:48.833 user 0m14.766s 00:11:48.833 sys 0m0.331s 00:11:48.833 11:25:54 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:48.833 11:25:54 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:11:48.833 ************************************ 00:11:48.833 END TEST bdev_verify 00:11:48.833 ************************************ 00:11:48.833 11:25:54 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:11:48.833 11:25:54 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:11:48.833 11:25:54 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:48.833 11:25:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:48.833 ************************************ 00:11:48.833 START TEST bdev_verify_big_io 00:11:48.833 ************************************ 00:11:48.833 11:25:54 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:11:48.833 [2024-11-20 11:25:54.274217] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:11:48.833 [2024-11-20 11:25:54.274365] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63801 ] 00:11:48.833 [2024-11-20 11:25:54.453127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:48.833 [2024-11-20 11:25:54.581083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.833 [2024-11-20 11:25:54.581082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:49.771 Running I/O for 5 seconds... 
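A quick sanity check on the verify totals above: throughput should equal IOPS times I/O size, and 18356.71 IOPS × 4096 B ≈ 75.19 MB/s, or 71.7 MiB/s after dividing by 1048576, which lines up with the 71.71 MiB/s in the Total row. The big-I/O pass now starting uses -o 65536, so the same identity holds at 64 KiB granularity, e.g. 528 IOPS × 64 KiB = 33.00 MiB/s in the first progress sample below.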
00:11:54.244 528.00 IOPS, 33.00 MiB/s [2024-11-20T11:26:00.945Z] 1686.50 IOPS, 105.41 MiB/s [2024-11-20T11:26:01.880Z] 1940.67 IOPS, 121.29 MiB/s [2024-11-20T11:26:01.880Z] 2323.00 IOPS, 145.19 MiB/s 00:11:56.118 Latency(us) 00:11:56.118 [2024-11-20T11:26:01.880Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:56.118 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:56.118 Verification LBA range: start 0x0 length 0xbd0b 00:11:56.118 Nvme0n1 : 5.90 113.80 7.11 0.00 0.00 1063553.89 21595.67 1150437.67 00:11:56.118 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:56.118 Verification LBA range: start 0xbd0b length 0xbd0b 00:11:56.118 Nvme0n1 : 5.90 103.14 6.45 0.00 0.00 1177118.59 12982.37 1829515.46 00:11:56.118 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:56.118 Verification LBA range: start 0x0 length 0x4ff8 00:11:56.118 Nvme1n1p1 : 5.90 105.29 6.58 0.00 0.00 1098736.14 76396.25 1781580.56 00:11:56.118 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:56.118 Verification LBA range: start 0x4ff8 length 0x4ff8 00:11:56.119 Nvme1n1p1 : 5.79 107.80 6.74 0.00 0.00 1099295.18 84884.72 1446036.24 00:11:56.119 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:56.119 Verification LBA range: start 0x0 length 0x4ff7 00:11:56.119 Nvme1n1p2 : 5.90 109.27 6.83 0.00 0.00 1052898.04 139810.13 1821526.31 00:11:56.119 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:56.119 Verification LBA range: start 0x4ff7 length 0x4ff7 00:11:56.119 Nvme1n1p2 : 5.96 111.87 6.99 0.00 0.00 1034828.55 101861.67 1470003.69 00:11:56.119 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:56.119 Verification LBA range: start 0x0 length 0x8000 00:11:56.119 Nvme2n1 : 6.03 123.45 7.72 0.00 0.00 919850.28 68906.42 978670.93 00:11:56.119 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:56.119 Verification LBA range: start 0x8000 length 0x8000 00:11:56.119 Nvme2n1 : 5.96 115.18 7.20 0.00 0.00 984123.50 64911.85 1493971.14 00:11:56.119 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:56.119 Verification LBA range: start 0x0 length 0x8000 00:11:56.119 Nvme2n2 : 6.09 126.96 7.94 0.00 0.00 869672.93 59419.31 1030600.41 00:11:56.119 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:56.119 Verification LBA range: start 0x8000 length 0x8000 00:11:56.119 Nvme2n2 : 6.02 119.88 7.49 0.00 0.00 917793.43 19598.38 1701689.05 00:11:56.119 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:56.119 Verification LBA range: start 0x0 length 0x8000 00:11:56.119 Nvme2n3 : 6.10 130.92 8.18 0.00 0.00 824662.51 54176.43 1046578.71 00:11:56.119 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:56.119 Verification LBA range: start 0x8000 length 0x8000 00:11:56.119 Nvme2n3 : 6.03 123.63 7.73 0.00 0.00 866566.46 36700.16 1725656.50 00:11:56.119 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:56.119 Verification LBA range: start 0x0 length 0x2000 00:11:56.119 Nvme3n1 : 6.13 141.75 8.86 0.00 0.00 742338.90 9237.46 1038589.56 00:11:56.119 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:56.119 Verification LBA range: start 0x2000 length 0x2000 00:11:56.119 Nvme3n1 : 6.09 139.44 8.72 0.00 0.00 
749568.65 7240.17 1757613.10 00:11:56.119 [2024-11-20T11:26:01.881Z] =================================================================================================================== 00:11:56.119 [2024-11-20T11:26:01.881Z] Total : 1672.37 104.52 0.00 0.00 942579.31 7240.17 1829515.46 00:11:58.654 00:11:58.654 real 0m9.722s 00:11:58.654 user 0m18.135s 00:11:58.654 sys 0m0.363s 00:11:58.654 11:26:03 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:58.654 11:26:03 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:11:58.654 ************************************ 00:11:58.654 END TEST bdev_verify_big_io 00:11:58.654 ************************************ 00:11:58.654 11:26:03 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:58.654 11:26:03 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:11:58.654 11:26:03 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:58.654 11:26:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:58.654 ************************************ 00:11:58.654 START TEST bdev_write_zeroes 00:11:58.654 ************************************ 00:11:58.654 11:26:03 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:58.654 [2024-11-20 11:26:04.081831] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:11:58.654 [2024-11-20 11:26:04.082051] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63922 ] 00:11:58.654 [2024-11-20 11:26:04.286417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:58.913 [2024-11-20 11:26:04.462028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.480 Running I/O for 1 seconds... 
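The write_zeroes pass drives the same bdevperf harness with -w write_zeroes, exercising the zero-fill path rather than data writes. Whether a bdev advertises that operation shows up in the supported_io_types map of bdev_get_bdevs output (visible in the GPT bdev dumps further down); a one-line check against the target from this run, assuming default socket settings:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme1n1p1 \
      | jq -r '.[0].supported_io_types.write_zeroes'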
00:12:00.852 46080.00 IOPS, 180.00 MiB/s 00:12:00.852 Latency(us) 00:12:00.852 [2024-11-20T11:26:06.614Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:00.852 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:00.852 Nvme0n1 : 1.03 6588.72 25.74 0.00 0.00 19337.14 13232.03 32455.92 00:12:00.852 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:00.852 Nvme1n1p1 : 1.03 6577.98 25.70 0.00 0.00 19335.59 12857.54 34702.87 00:12:00.852 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:00.852 Nvme1n1p2 : 1.03 6567.22 25.65 0.00 0.00 19288.00 12919.95 33204.91 00:12:00.852 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:00.852 Nvme2n1 : 1.03 6557.42 25.61 0.00 0.00 19193.79 13356.86 28211.69 00:12:00.852 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:00.852 Nvme2n2 : 1.04 6599.43 25.78 0.00 0.00 19091.86 11546.82 27213.04 00:12:00.852 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:00.852 Nvme2n3 : 1.04 6589.65 25.74 0.00 0.00 19062.00 10610.59 27587.54 00:12:00.852 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:00.852 Nvme3n1 : 1.04 6518.36 25.46 0.00 0.00 19219.67 11609.23 35701.52 00:12:00.852 [2024-11-20T11:26:06.615Z] =================================================================================================================== 00:12:00.853 [2024-11-20T11:26:06.615Z] Total : 45998.77 179.68 0.00 0.00 19217.91 10610.59 35701.52 00:12:02.230 00:12:02.230 real 0m3.749s 00:12:02.230 user 0m3.321s 00:12:02.230 sys 0m0.300s 00:12:02.230 11:26:07 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:02.230 11:26:07 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:12:02.230 ************************************ 00:12:02.230 END TEST bdev_write_zeroes 00:12:02.230 ************************************ 00:12:02.230 11:26:07 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:02.230 11:26:07 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:12:02.230 11:26:07 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:02.230 11:26:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:02.230 ************************************ 00:12:02.230 START TEST bdev_json_nonenclosed 00:12:02.230 ************************************ 00:12:02.230 11:26:07 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:02.230 [2024-11-20 11:26:07.904629] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:12:02.230 [2024-11-20 11:26:07.904805] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63986 ] 00:12:02.488 [2024-11-20 11:26:08.106301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.747 [2024-11-20 11:26:08.287962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.747 [2024-11-20 11:26:08.288097] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:12:02.747 [2024-11-20 11:26:08.288130] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:12:02.747 [2024-11-20 11:26:08.288149] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:03.003 00:12:03.003 real 0m0.831s 00:12:03.004 user 0m0.551s 00:12:03.004 sys 0m0.173s 00:12:03.004 11:26:08 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:03.004 11:26:08 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:12:03.004 ************************************ 00:12:03.004 END TEST bdev_json_nonenclosed 00:12:03.004 ************************************ 00:12:03.004 11:26:08 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:03.004 11:26:08 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:12:03.004 11:26:08 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:03.004 11:26:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:03.004 ************************************ 00:12:03.004 START TEST bdev_json_nonarray 00:12:03.004 ************************************ 00:12:03.004 11:26:08 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:03.262 [2024-11-20 11:26:08.788972] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:12:03.262 [2024-11-20 11:26:08.789159] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64012 ] 00:12:03.262 [2024-11-20 11:26:08.989036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.521 [2024-11-20 11:26:09.181409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.521 [2024-11-20 11:26:09.181577] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
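Both JSON negative tests feed bdevperf a deliberately malformed config: nonenclosed.json omits the enclosing object braces, and nonarray.json makes "subsystems" something other than an array; in each case json_config_prepare_ctx must reject the file and the app must stop with a non-zero status, which is exactly what the traces above and below show. For contrast, a minimal well-formed config has this shape (an illustrative sketch, not the repo's actual bdev.json):

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_malloc_create",
            "params": { "name": "Malloc0", "num_blocks": 8192, "block_size": 512 }
          }
        ]
      }
    ]
  }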
00:12:03.521 [2024-11-20 11:26:09.181615] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:12:03.521 [2024-11-20 11:26:09.181634] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:03.779 00:12:03.779 real 0m0.811s 00:12:03.779 user 0m0.542s 00:12:03.779 sys 0m0.161s 00:12:03.779 11:26:09 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:03.779 11:26:09 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:12:03.779 ************************************ 00:12:03.779 END TEST bdev_json_nonarray 00:12:03.779 ************************************ 00:12:03.779 11:26:09 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:12:03.779 11:26:09 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:12:03.779 11:26:09 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:12:03.779 11:26:09 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:03.779 11:26:09 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:03.779 11:26:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:03.779 ************************************ 00:12:03.779 START TEST bdev_gpt_uuid 00:12:03.779 ************************************ 00:12:03.779 11:26:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:12:03.779 11:26:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:12:03.779 11:26:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:12:03.779 11:26:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=64041 00:12:03.779 11:26:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:03.779 11:26:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 64041 00:12:03.779 11:26:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:03.779 11:26:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 64041 ']' 00:12:03.779 11:26:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.779 11:26:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:04.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.037 11:26:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.037 11:26:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:04.037 11:26:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:12:04.037 [2024-11-20 11:26:09.705048] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
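The bdev_gpt_uuid test now starting boots a bare spdk_tgt, loads bdev.json, and asserts that the two GPT partition bdevs report the expected partition-type and unique-partition GUIDs. The assertions that follow reduce to jq lookups over bdev_get_bdevs output, along the lines of (mirroring the blockdev.sh@622-623 and @627-628 checks below):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme1n1p1 \
      | jq -r '.[0].driver_specific.gpt.unique_partition_guid'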
00:12:04.037 [2024-11-20 11:26:09.705874] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64041 ] 00:12:04.295 [2024-11-20 11:26:09.888230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.295 [2024-11-20 11:26:10.034453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.670 11:26:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:05.670 11:26:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:12:05.670 11:26:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:05.670 11:26:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.670 11:26:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:12:05.670 Some configs were skipped because the RPC state that can call them passed over. 00:12:05.670 11:26:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.670 11:26:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:12:05.670 11:26:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.670 11:26:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:12:05.670 11:26:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.670 11:26:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:12:05.670 11:26:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.670 11:26:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:12:05.670 11:26:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.670 11:26:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:12:05.670 { 00:12:05.670 "name": "Nvme1n1p1", 00:12:05.670 "aliases": [ 00:12:05.670 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:12:05.670 ], 00:12:05.670 "product_name": "GPT Disk", 00:12:05.670 "block_size": 4096, 00:12:05.670 "num_blocks": 655104, 00:12:05.670 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:12:05.670 "assigned_rate_limits": { 00:12:05.670 "rw_ios_per_sec": 0, 00:12:05.670 "rw_mbytes_per_sec": 0, 00:12:05.670 "r_mbytes_per_sec": 0, 00:12:05.670 "w_mbytes_per_sec": 0 00:12:05.670 }, 00:12:05.670 "claimed": false, 00:12:05.670 "zoned": false, 00:12:05.670 "supported_io_types": { 00:12:05.671 "read": true, 00:12:05.671 "write": true, 00:12:05.671 "unmap": true, 00:12:05.671 "flush": true, 00:12:05.671 "reset": true, 00:12:05.671 "nvme_admin": false, 00:12:05.671 "nvme_io": false, 00:12:05.671 "nvme_io_md": false, 00:12:05.671 "write_zeroes": true, 00:12:05.671 "zcopy": false, 00:12:05.671 "get_zone_info": false, 00:12:05.671 "zone_management": false, 00:12:05.671 "zone_append": false, 00:12:05.671 "compare": true, 00:12:05.671 "compare_and_write": false, 00:12:05.671 "abort": true, 00:12:05.671 "seek_hole": false, 00:12:05.671 "seek_data": false, 00:12:05.671 "copy": true, 00:12:05.671 "nvme_iov_md": false 00:12:05.671 }, 00:12:05.671 "driver_specific": { 
00:12:05.671 "gpt": { 00:12:05.671 "base_bdev": "Nvme1n1", 00:12:05.671 "offset_blocks": 256, 00:12:05.671 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:12:05.671 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:12:05.671 "partition_name": "SPDK_TEST_first" 00:12:05.671 } 00:12:05.671 } 00:12:05.671 } 00:12:05.671 ]' 00:12:05.671 11:26:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:12:05.929 11:26:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:12:05.929 11:26:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:12:05.930 11:26:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:12:05.930 11:26:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:12:05.930 11:26:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:12:05.930 11:26:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:12:05.930 11:26:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.930 11:26:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:12:05.930 11:26:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.930 11:26:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:12:05.930 { 00:12:05.930 "name": "Nvme1n1p2", 00:12:05.930 "aliases": [ 00:12:05.930 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:12:05.930 ], 00:12:05.930 "product_name": "GPT Disk", 00:12:05.930 "block_size": 4096, 00:12:05.930 "num_blocks": 655103, 00:12:05.930 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:12:05.930 "assigned_rate_limits": { 00:12:05.930 "rw_ios_per_sec": 0, 00:12:05.930 "rw_mbytes_per_sec": 0, 00:12:05.930 "r_mbytes_per_sec": 0, 00:12:05.930 "w_mbytes_per_sec": 0 00:12:05.930 }, 00:12:05.930 "claimed": false, 00:12:05.930 "zoned": false, 00:12:05.930 "supported_io_types": { 00:12:05.930 "read": true, 00:12:05.930 "write": true, 00:12:05.930 "unmap": true, 00:12:05.930 "flush": true, 00:12:05.930 "reset": true, 00:12:05.930 "nvme_admin": false, 00:12:05.930 "nvme_io": false, 00:12:05.930 "nvme_io_md": false, 00:12:05.930 "write_zeroes": true, 00:12:05.930 "zcopy": false, 00:12:05.930 "get_zone_info": false, 00:12:05.930 "zone_management": false, 00:12:05.930 "zone_append": false, 00:12:05.930 "compare": true, 00:12:05.930 "compare_and_write": false, 00:12:05.930 "abort": true, 00:12:05.930 "seek_hole": false, 00:12:05.930 "seek_data": false, 00:12:05.930 "copy": true, 00:12:05.930 "nvme_iov_md": false 00:12:05.930 }, 00:12:05.930 "driver_specific": { 00:12:05.930 "gpt": { 00:12:05.930 "base_bdev": "Nvme1n1", 00:12:05.930 "offset_blocks": 655360, 00:12:05.930 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:12:05.930 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:12:05.930 "partition_name": "SPDK_TEST_second" 00:12:05.930 } 00:12:05.930 } 00:12:05.930 } 00:12:05.930 ]' 00:12:05.930 11:26:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:12:05.930 11:26:11 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:12:05.930 11:26:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:12:05.930 11:26:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:12:05.930 11:26:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:12:06.189 11:26:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:12:06.189 11:26:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 64041 00:12:06.189 11:26:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 64041 ']' 00:12:06.189 11:26:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 64041 00:12:06.189 11:26:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:12:06.189 11:26:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:06.189 11:26:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64041 00:12:06.189 11:26:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:06.189 killing process with pid 64041 00:12:06.189 11:26:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:06.189 11:26:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64041' 00:12:06.189 11:26:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 64041 00:12:06.189 11:26:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 64041 00:12:08.721 00:12:08.721 real 0m4.926s 00:12:08.721 user 0m5.102s 00:12:08.721 sys 0m0.632s 00:12:08.721 11:26:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:08.721 ************************************ 00:12:08.721 END TEST bdev_gpt_uuid 00:12:08.721 ************************************ 00:12:08.721 11:26:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:12:08.979 11:26:14 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:12:08.979 11:26:14 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:12:08.979 11:26:14 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:12:08.979 11:26:14 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:12:08.980 11:26:14 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:08.980 11:26:14 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:12:08.980 11:26:14 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:12:08.980 11:26:14 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:12:08.980 11:26:14 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:09.258 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:09.517 Waiting for block devices as requested 00:12:09.517 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:09.776 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:12:09.776 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:10.036 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:15.305 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:15.305 11:26:20 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:12:15.305 11:26:20 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:12:15.305 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:12:15.305 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:12:15.305 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:12:15.305 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:12:15.305 11:26:20 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:12:15.305 00:12:15.306 real 1m13.531s 00:12:15.306 user 1m34.130s 00:12:15.306 sys 0m13.634s 00:12:15.306 ************************************ 00:12:15.306 END TEST blockdev_nvme_gpt 00:12:15.306 ************************************ 00:12:15.306 11:26:20 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:15.306 11:26:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:15.306 11:26:20 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:12:15.306 11:26:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:15.306 11:26:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:15.306 11:26:20 -- common/autotest_common.sh@10 -- # set +x 00:12:15.306 ************************************ 00:12:15.306 START TEST nvme 00:12:15.306 ************************************ 00:12:15.306 11:26:20 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:12:15.306 * Looking for test storage... 00:12:15.564 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:15.564 11:26:21 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:15.564 11:26:21 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:15.564 11:26:21 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:15.564 11:26:21 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:15.564 11:26:21 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:15.564 11:26:21 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:15.564 11:26:21 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:15.564 11:26:21 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:15.564 11:26:21 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:15.564 11:26:21 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:15.564 11:26:21 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:15.564 11:26:21 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:15.564 11:26:21 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:15.564 11:26:21 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:15.564 11:26:21 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:15.564 11:26:21 nvme -- scripts/common.sh@344 -- # case "$op" in 00:12:15.564 11:26:21 nvme -- scripts/common.sh@345 -- # : 1 00:12:15.564 11:26:21 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:15.564 11:26:21 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:15.564 11:26:21 nvme -- scripts/common.sh@365 -- # decimal 1 00:12:15.564 11:26:21 nvme -- scripts/common.sh@353 -- # local d=1 00:12:15.564 11:26:21 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:15.564 11:26:21 nvme -- scripts/common.sh@355 -- # echo 1 00:12:15.564 11:26:21 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:15.564 11:26:21 nvme -- scripts/common.sh@366 -- # decimal 2 00:12:15.564 11:26:21 nvme -- scripts/common.sh@353 -- # local d=2 00:12:15.564 11:26:21 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:15.564 11:26:21 nvme -- scripts/common.sh@355 -- # echo 2 00:12:15.564 11:26:21 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:15.564 11:26:21 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:15.564 11:26:21 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:15.564 11:26:21 nvme -- scripts/common.sh@368 -- # return 0 00:12:15.564 11:26:21 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:15.564 11:26:21 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:15.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.564 --rc genhtml_branch_coverage=1 00:12:15.564 --rc genhtml_function_coverage=1 00:12:15.564 --rc genhtml_legend=1 00:12:15.564 --rc geninfo_all_blocks=1 00:12:15.564 --rc geninfo_unexecuted_blocks=1 00:12:15.564 00:12:15.564 ' 00:12:15.564 11:26:21 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:15.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.564 --rc genhtml_branch_coverage=1 00:12:15.564 --rc genhtml_function_coverage=1 00:12:15.564 --rc genhtml_legend=1 00:12:15.564 --rc geninfo_all_blocks=1 00:12:15.564 --rc geninfo_unexecuted_blocks=1 00:12:15.564 00:12:15.564 ' 00:12:15.564 11:26:21 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:15.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.564 --rc genhtml_branch_coverage=1 00:12:15.564 --rc genhtml_function_coverage=1 00:12:15.564 --rc genhtml_legend=1 00:12:15.564 --rc geninfo_all_blocks=1 00:12:15.564 --rc geninfo_unexecuted_blocks=1 00:12:15.564 00:12:15.564 ' 00:12:15.564 11:26:21 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:15.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.564 --rc genhtml_branch_coverage=1 00:12:15.564 --rc genhtml_function_coverage=1 00:12:15.564 --rc genhtml_legend=1 00:12:15.564 --rc geninfo_all_blocks=1 00:12:15.564 --rc geninfo_unexecuted_blocks=1 00:12:15.564 00:12:15.564 ' 00:12:15.564 11:26:21 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:16.131 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:17.069 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:17.069 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:17.069 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:17.069 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:17.069 11:26:22 nvme -- nvme/nvme.sh@79 -- # uname 00:12:17.069 11:26:22 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:12:17.069 11:26:22 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:12:17.069 11:26:22 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:12:17.069 11:26:22 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:12:17.069 11:26:22 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:12:17.069 11:26:22 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:12:17.069 11:26:22 nvme -- common/autotest_common.sh@1075 -- # stubpid=64703 00:12:17.069 11:26:22 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:12:17.069 Waiting for stub to ready for secondary processes... 00:12:17.069 11:26:22 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:12:17.069 11:26:22 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:12:17.069 11:26:22 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64703 ]] 00:12:17.069 11:26:22 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:12:17.069 [2024-11-20 11:26:22.768565] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:12:17.069 [2024-11-20 11:26:22.768753] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:12:18.027 11:26:23 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:12:18.027 11:26:23 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64703 ]] 00:12:18.027 11:26:23 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:12:18.285 [2024-11-20 11:26:23.879249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:18.542 [2024-11-20 11:26:24.046303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:18.542 [2024-11-20 11:26:24.046495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:18.542 [2024-11-20 11:26:24.046502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:18.542 [2024-11-20 11:26:24.066948] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:12:18.542 [2024-11-20 11:26:24.067003] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:12:18.542 [2024-11-20 11:26:24.080812] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:12:18.542 [2024-11-20 11:26:24.082204] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:12:18.542 [2024-11-20 11:26:24.087855] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:12:18.542 [2024-11-20 11:26:24.088273] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:12:18.542 [2024-11-20 11:26:24.088440] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:12:18.542 [2024-11-20 11:26:24.091649] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:12:18.542 [2024-11-20 11:26:24.091856] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:12:18.542 [2024-11-20 11:26:24.091939] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:12:18.542 [2024-11-20 11:26:24.094745] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:12:18.542 [2024-11-20 11:26:24.094947] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:12:18.542 [2024-11-20 11:26:24.095027] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:12:18.542 [2024-11-20 11:26:24.095080] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:12:18.542 [2024-11-20 11:26:24.095142] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:12:19.107 11:26:24 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:12:19.107 done. 00:12:19.107 11:26:24 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:12:19.107 11:26:24 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:12:19.107 11:26:24 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:12:19.107 11:26:24 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:19.107 11:26:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:19.107 ************************************ 00:12:19.107 START TEST nvme_reset 00:12:19.107 ************************************ 00:12:19.107 11:26:24 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:12:19.364 Initializing NVMe Controllers 00:12:19.364 Skipping QEMU NVMe SSD at 0000:00:10.0 00:12:19.364 Skipping QEMU NVMe SSD at 0000:00:11.0 00:12:19.364 Skipping QEMU NVMe SSD at 0000:00:13.0 00:12:19.364 Skipping QEMU NVMe SSD at 0000:00:12.0 00:12:19.364 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:12:19.364 00:12:19.364 real 0m0.319s 00:12:19.364 user 0m0.114s 00:12:19.364 sys 0m0.151s 00:12:19.364 11:26:25 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:19.364 11:26:25 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:12:19.364 ************************************ 00:12:19.364 END TEST nvme_reset 00:12:19.364 ************************************ 00:12:19.364 11:26:25 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:12:19.364 11:26:25 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:19.364 11:26:25 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:19.364 11:26:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:19.364 ************************************ 00:12:19.364 START TEST nvme_identify 00:12:19.364 ************************************ 00:12:19.364 11:26:25 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:12:19.364 11:26:25 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:12:19.364 11:26:25 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:12:19.364 11:26:25 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:12:19.364 11:26:25 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:12:19.364 11:26:25 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:12:19.365 11:26:25 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:12:19.365 11:26:25 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:19.365 11:26:25 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:19.365 11:26:25 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:12:19.622 11:26:25 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:12:19.622 11:26:25 nvme.nvme_identify -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:19.622 11:26:25 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:12:19.882 [2024-11-20 11:26:25.489237] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64732 terminated unexpected 00:12:19.882 ===================================================== 00:12:19.882 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:19.882 ===================================================== 00:12:19.882 Controller Capabilities/Features 00:12:19.882 ================================ 00:12:19.882 Vendor ID: 1b36 00:12:19.882 Subsystem Vendor ID: 1af4 00:12:19.882 Serial Number: 12340 00:12:19.882 Model Number: QEMU NVMe Ctrl 00:12:19.882 Firmware Version: 8.0.0 00:12:19.882 Recommended Arb Burst: 6 00:12:19.882 IEEE OUI Identifier: 00 54 52 00:12:19.882 Multi-path I/O 00:12:19.882 May have multiple subsystem ports: No 00:12:19.882 May have multiple controllers: No 00:12:19.882 Associated with SR-IOV VF: No 00:12:19.882 Max Data Transfer Size: 524288 00:12:19.882 Max Number of Namespaces: 256 00:12:19.882 Max Number of I/O Queues: 64 00:12:19.882 NVMe Specification Version (VS): 1.4 00:12:19.882 NVMe Specification Version (Identify): 1.4 00:12:19.882 Maximum Queue Entries: 2048 00:12:19.882 Contiguous Queues Required: Yes 00:12:19.882 Arbitration Mechanisms Supported 00:12:19.882 Weighted Round Robin: Not Supported 00:12:19.882 Vendor Specific: Not Supported 00:12:19.882 Reset Timeout: 7500 ms 00:12:19.882 Doorbell Stride: 4 bytes 00:12:19.882 NVM Subsystem Reset: Not Supported 00:12:19.882 Command Sets Supported 00:12:19.882 NVM Command Set: Supported 00:12:19.882 Boot Partition: Not Supported 00:12:19.882 Memory Page Size Minimum: 4096 bytes 00:12:19.882 Memory Page Size Maximum: 65536 bytes 00:12:19.882 Persistent Memory Region: Not Supported 00:12:19.882 Optional Asynchronous Events Supported 00:12:19.883 Namespace Attribute Notices: Supported 00:12:19.883 Firmware Activation Notices: Not Supported 00:12:19.883 ANA Change Notices: Not Supported 00:12:19.883 PLE Aggregate Log Change Notices: Not Supported 00:12:19.883 LBA Status Info Alert Notices: Not Supported 00:12:19.883 EGE Aggregate Log Change Notices: Not Supported 00:12:19.883 Normal NVM Subsystem Shutdown event: Not Supported 00:12:19.883 Zone Descriptor Change Notices: Not Supported 00:12:19.883 Discovery Log Change Notices: Not Supported 00:12:19.883 Controller Attributes 00:12:19.883 128-bit Host Identifier: Not Supported 00:12:19.883 Non-Operational Permissive Mode: Not Supported 00:12:19.883 NVM Sets: Not Supported 00:12:19.883 Read Recovery Levels: Not Supported 00:12:19.883 Endurance Groups: Not Supported 00:12:19.883 Predictable Latency Mode: Not Supported 00:12:19.883 Traffic Based Keep ALive: Not Supported 00:12:19.883 Namespace Granularity: Not Supported 00:12:19.883 SQ Associations: Not Supported 00:12:19.883 UUID List: Not Supported 00:12:19.883 Multi-Domain Subsystem: Not Supported 00:12:19.883 Fixed Capacity Management: Not Supported 00:12:19.883 Variable Capacity Management: Not Supported 00:12:19.883 Delete Endurance Group: Not Supported 00:12:19.883 Delete NVM Set: Not Supported 00:12:19.883 Extended LBA Formats Supported: Supported 00:12:19.883 Flexible Data Placement Supported: Not Supported 00:12:19.883 00:12:19.883 Controller Memory Buffer Support 00:12:19.883 ================================ 00:12:19.883 Supported: No 
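For reference, the get_nvme_bdfs helper traced above reduces to two lines of bash: gen_nvme.sh renders a JSON bdev config and jq pulls out each controller's PCI address (traddr). A minimal standalone sketch, assuming $rootdir points at the SPDK checkout used in this run:

    # Enumerate NVMe BDFs the way the nvme_identify trace above does:
    # gen_nvme.sh emits a bdev config as JSON; jq extracts every traddr.
    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"    # here: 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0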
00:12:19.883 00:12:19.883 Persistent Memory Region Support 00:12:19.883 ================================ 00:12:19.883 Supported: No 00:12:19.883 00:12:19.883 Admin Command Set Attributes 00:12:19.883 ============================ 00:12:19.883 Security Send/Receive: Not Supported 00:12:19.883 Format NVM: Supported 00:12:19.883 Firmware Activate/Download: Not Supported 00:12:19.883 Namespace Management: Supported 00:12:19.883 Device Self-Test: Not Supported 00:12:19.883 Directives: Supported 00:12:19.883 NVMe-MI: Not Supported 00:12:19.883 Virtualization Management: Not Supported 00:12:19.883 Doorbell Buffer Config: Supported 00:12:19.883 Get LBA Status Capability: Not Supported 00:12:19.883 Command & Feature Lockdown Capability: Not Supported 00:12:19.883 Abort Command Limit: 4 00:12:19.883 Async Event Request Limit: 4 00:12:19.883 Number of Firmware Slots: N/A 00:12:19.883 Firmware Slot 1 Read-Only: N/A 00:12:19.883 Firmware Activation Without Reset: N/A 00:12:19.883 Multiple Update Detection Support: N/A 00:12:19.883 Firmware Update Granularity: No Information Provided 00:12:19.883 Per-Namespace SMART Log: Yes 00:12:19.883 Asymmetric Namespace Access Log Page: Not Supported 00:12:19.883 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:12:19.883 Command Effects Log Page: Supported 00:12:19.883 Get Log Page Extended Data: Supported 00:12:19.883 Telemetry Log Pages: Not Supported 00:12:19.883 Persistent Event Log Pages: Not Supported 00:12:19.883 Supported Log Pages Log Page: May Support 00:12:19.883 Commands Supported & Effects Log Page: Not Supported 00:12:19.883 Feature Identifiers & Effects Log Page:May Support 00:12:19.883 NVMe-MI Commands & Effects Log Page: May Support 00:12:19.883 Data Area 4 for Telemetry Log: Not Supported 00:12:19.883 Error Log Page Entries Supported: 1 00:12:19.883 Keep Alive: Not Supported 00:12:19.883 00:12:19.883 NVM Command Set Attributes 00:12:19.883 ========================== 00:12:19.883 Submission Queue Entry Size 00:12:19.883 Max: 64 00:12:19.883 Min: 64 00:12:19.883 Completion Queue Entry Size 00:12:19.883 Max: 16 00:12:19.883 Min: 16 00:12:19.883 Number of Namespaces: 256 00:12:19.883 Compare Command: Supported 00:12:19.883 Write Uncorrectable Command: Not Supported 00:12:19.883 Dataset Management Command: Supported 00:12:19.883 Write Zeroes Command: Supported 00:12:19.883 Set Features Save Field: Supported 00:12:19.883 Reservations: Not Supported 00:12:19.883 Timestamp: Supported 00:12:19.883 Copy: Supported 00:12:19.883 Volatile Write Cache: Present 00:12:19.883 Atomic Write Unit (Normal): 1 00:12:19.883 Atomic Write Unit (PFail): 1 00:12:19.883 Atomic Compare & Write Unit: 1 00:12:19.883 Fused Compare & Write: Not Supported 00:12:19.883 Scatter-Gather List 00:12:19.883 SGL Command Set: Supported 00:12:19.883 SGL Keyed: Not Supported 00:12:19.883 SGL Bit Bucket Descriptor: Not Supported 00:12:19.883 SGL Metadata Pointer: Not Supported 00:12:19.883 Oversized SGL: Not Supported 00:12:19.883 SGL Metadata Address: Not Supported 00:12:19.883 SGL Offset: Not Supported 00:12:19.883 Transport SGL Data Block: Not Supported 00:12:19.883 Replay Protected Memory Block: Not Supported 00:12:19.883 00:12:19.883 Firmware Slot Information 00:12:19.883 ========================= 00:12:19.883 Active slot: 1 00:12:19.883 Slot 1 Firmware Revision: 1.0 00:12:19.883 00:12:19.883 00:12:19.883 Commands Supported and Effects 00:12:19.883 ============================== 00:12:19.883 Admin Commands 00:12:19.883 -------------- 00:12:19.883 Delete I/O Submission Queue (00h): Supported 
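Since spdk_nvme_identify emits a flat text report like the one being printed here, individual fields are easy to scrape from the shell. A hypothetical one-liner (the binary path is the one invoked above; the awk filter is illustrative, not part of the test):

    # Print just the max transfer size of controller index 0 from the text dump.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 \
        | awk -F': *' '/Max Data Transfer Size/ { print $2; exit }'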
00:12:19.883 Create I/O Submission Queue (01h): Supported 00:12:19.883 Get Log Page (02h): Supported 00:12:19.883 Delete I/O Completion Queue (04h): Supported 00:12:19.883 Create I/O Completion Queue (05h): Supported 00:12:19.883 Identify (06h): Supported 00:12:19.883 Abort (08h): Supported 00:12:19.883 Set Features (09h): Supported 00:12:19.883 Get Features (0Ah): Supported 00:12:19.883 Asynchronous Event Request (0Ch): Supported 00:12:19.883 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:19.883 Directive Send (19h): Supported 00:12:19.883 Directive Receive (1Ah): Supported 00:12:19.883 Virtualization Management (1Ch): Supported 00:12:19.883 Doorbell Buffer Config (7Ch): Supported 00:12:19.883 Format NVM (80h): Supported LBA-Change 00:12:19.883 I/O Commands 00:12:19.883 ------------ 00:12:19.883 Flush (00h): Supported LBA-Change 00:12:19.883 Write (01h): Supported LBA-Change 00:12:19.883 Read (02h): Supported 00:12:19.883 Compare (05h): Supported 00:12:19.883 Write Zeroes (08h): Supported LBA-Change 00:12:19.883 Dataset Management (09h): Supported LBA-Change 00:12:19.883 Unknown (0Ch): Supported 00:12:19.883 Unknown (12h): Supported 00:12:19.883 Copy (19h): Supported LBA-Change 00:12:19.883 Unknown (1Dh): Supported LBA-Change 00:12:19.883 00:12:19.883 Error Log 00:12:19.883 ========= 00:12:19.883 00:12:19.883 Arbitration 00:12:19.883 =========== 00:12:19.883 Arbitration Burst: no limit 00:12:19.883 00:12:19.883 Power Management 00:12:19.883 ================ 00:12:19.883 Number of Power States: 1 00:12:19.883 Current Power State: Power State #0 00:12:19.883 Power State #0: 00:12:19.883 Max Power: 25.00 W 00:12:19.883 Non-Operational State: Operational 00:12:19.883 Entry Latency: 16 microseconds 00:12:19.883 Exit Latency: 4 microseconds 00:12:19.883 Relative Read Throughput: 0 00:12:19.883 Relative Read Latency: 0 00:12:19.883 Relative Write Throughput: 0 00:12:19.883 Relative Write Latency: 0 00:12:19.883 Idle Power: Not Reported [2024-11-20 11:26:25.490378] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64732 terminated unexpected 00:12:19.883 Active Power: Not Reported 00:12:19.883 Non-Operational Permissive Mode: Not Supported 00:12:19.883 00:12:19.883 Health Information 00:12:19.883 ================== 00:12:19.883 Critical Warnings: 00:12:19.883 Available Spare Space: OK 00:12:19.883 Temperature: OK 00:12:19.883 Device Reliability: OK 00:12:19.883 Read Only: No 00:12:19.883 Volatile Memory Backup: OK 00:12:19.883 Current Temperature: 323 Kelvin (50 Celsius) 00:12:19.883 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:19.883 Available Spare: 0% 00:12:19.883 Available Spare Threshold: 0% 00:12:19.883 Life Percentage Used: 0% 00:12:19.883 Data Units Read: 667 00:12:19.883 Data Units Written: 595 00:12:19.883 Host Read Commands: 31987 00:12:19.883 Host Write Commands: 31773 00:12:19.883 Controller Busy Time: 0 minutes 00:12:19.883 Power Cycles: 0 00:12:19.883 Power On Hours: 0 hours 00:12:19.883 Unsafe Shutdowns: 0 00:12:19.883 Unrecoverable Media Errors: 0 00:12:19.883 Lifetime Error Log Entries: 0 00:12:19.883 Warning Temperature Time: 0 minutes 00:12:19.883 Critical Temperature Time: 0 minutes 00:12:19.883 00:12:19.883 Number of Queues 00:12:19.883 ================ 00:12:19.883 Number of I/O Submission Queues: 64 00:12:19.883 Number of I/O Completion Queues: 64 00:12:19.883 00:12:19.883 ZNS Specific Controller Data 00:12:19.883 ============================ 00:12:19.883 Zone Append Size Limit: 0 00:12:19.883 
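A note on the temperature lines just above: NVMe reports the composite temperature in Kelvin, and the parenthesized Celsius value in this dump is simply the Kelvin reading minus 273. A quick sanity check with the values copied from the report:

    # 323 K -> 50 C and 343 K -> 70 C, matching the Health Information above.
    for k in 323 343; do echo "$k K = $(( k - 273 )) C"; done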
00:12:19.883 00:12:19.884 Active Namespaces 00:12:19.884 ================= 00:12:19.884 Namespace ID:1 00:12:19.884 Error Recovery Timeout: Unlimited 00:12:19.884 Command Set Identifier: NVM (00h) 00:12:19.884 Deallocate: Supported 00:12:19.884 Deallocated/Unwritten Error: Supported 00:12:19.884 Deallocated Read Value: All 0x00 00:12:19.884 Deallocate in Write Zeroes: Not Supported 00:12:19.884 Deallocated Guard Field: 0xFFFF 00:12:19.884 Flush: Supported 00:12:19.884 Reservation: Not Supported 00:12:19.884 Metadata Transferred as: Separate Metadata Buffer 00:12:19.884 Namespace Sharing Capabilities: Private 00:12:19.884 Size (in LBAs): 1548666 (5GiB) 00:12:19.884 Capacity (in LBAs): 1548666 (5GiB) 00:12:19.884 Utilization (in LBAs): 1548666 (5GiB) 00:12:19.884 Thin Provisioning: Not Supported 00:12:19.884 Per-NS Atomic Units: No 00:12:19.884 Maximum Single Source Range Length: 128 00:12:19.884 Maximum Copy Length: 128 00:12:19.884 Maximum Source Range Count: 128 00:12:19.884 NGUID/EUI64 Never Reused: No 00:12:19.884 Namespace Write Protected: No 00:12:19.884 Number of LBA Formats: 8 00:12:19.884 Current LBA Format: LBA Format #07 00:12:19.884 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:19.884 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:19.884 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:19.884 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:19.884 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:19.884 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:19.884 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:19.884 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:19.884 00:12:19.884 NVM Specific Namespace Data 00:12:19.884 =========================== 00:12:19.884 Logical Block Storage Tag Mask: 0 00:12:19.884 Protection Information Capabilities: 00:12:19.884 16b Guard Protection Information Storage Tag Support: No 00:12:19.884 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:19.884 Storage Tag Check Read Support: No 00:12:19.884 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:19.884 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:19.884 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:19.884 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:19.884 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:19.884 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:19.884 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:19.884 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:19.884 ===================================================== 00:12:19.884 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:19.884 ===================================================== 00:12:19.884 Controller Capabilities/Features 00:12:19.884 ================================ 00:12:19.884 Vendor ID: 1b36 00:12:19.884 Subsystem Vendor ID: 1af4 00:12:19.884 Serial Number: 12341 00:12:19.884 Model Number: QEMU NVMe Ctrl 00:12:19.884 Firmware Version: 8.0.0 00:12:19.884 Recommended Arb Burst: 6 00:12:19.884 IEEE OUI Identifier: 00 54 52 00:12:19.884 Multi-path I/O 00:12:19.884 May have multiple subsystem ports: No 00:12:19.884 May have multiple controllers: No 
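On the namespace sizing shown above: the GiB figure in parentheses is consistent with the LBA count multiplied by the data size of the current LBA format (4096 bytes for LBA Format #07), floored to whole GiB by integer division. Checking the 12340 namespace with numbers copied from the dump:

    # 1548666 LBAs x 4096-byte blocks = 6343335936 bytes -> floor to 5 GiB.
    echo $(( 1548666 * 4096 / 1024 / 1024 / 1024 ))    # prints 5, as reported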
00:12:19.884 Associated with SR-IOV VF: No 00:12:19.884 Max Data Transfer Size: 524288 00:12:19.884 Max Number of Namespaces: 256 00:12:19.884 Max Number of I/O Queues: 64 00:12:19.884 NVMe Specification Version (VS): 1.4 00:12:19.884 NVMe Specification Version (Identify): 1.4 00:12:19.884 Maximum Queue Entries: 2048 00:12:19.884 Contiguous Queues Required: Yes 00:12:19.884 Arbitration Mechanisms Supported 00:12:19.884 Weighted Round Robin: Not Supported 00:12:19.884 Vendor Specific: Not Supported 00:12:19.884 Reset Timeout: 7500 ms 00:12:19.884 Doorbell Stride: 4 bytes 00:12:19.884 NVM Subsystem Reset: Not Supported 00:12:19.884 Command Sets Supported 00:12:19.884 NVM Command Set: Supported 00:12:19.884 Boot Partition: Not Supported 00:12:19.884 Memory Page Size Minimum: 4096 bytes 00:12:19.884 Memory Page Size Maximum: 65536 bytes 00:12:19.884 Persistent Memory Region: Not Supported 00:12:19.884 Optional Asynchronous Events Supported 00:12:19.884 Namespace Attribute Notices: Supported 00:12:19.884 Firmware Activation Notices: Not Supported 00:12:19.884 ANA Change Notices: Not Supported 00:12:19.884 PLE Aggregate Log Change Notices: Not Supported 00:12:19.884 LBA Status Info Alert Notices: Not Supported 00:12:19.884 EGE Aggregate Log Change Notices: Not Supported 00:12:19.884 Normal NVM Subsystem Shutdown event: Not Supported 00:12:19.884 Zone Descriptor Change Notices: Not Supported 00:12:19.884 Discovery Log Change Notices: Not Supported 00:12:19.884 Controller Attributes 00:12:19.884 128-bit Host Identifier: Not Supported 00:12:19.884 Non-Operational Permissive Mode: Not Supported 00:12:19.884 NVM Sets: Not Supported 00:12:19.884 Read Recovery Levels: Not Supported 00:12:19.884 Endurance Groups: Not Supported 00:12:19.884 Predictable Latency Mode: Not Supported 00:12:19.884 Traffic Based Keep ALive: Not Supported 00:12:19.884 Namespace Granularity: Not Supported 00:12:19.884 SQ Associations: Not Supported 00:12:19.884 UUID List: Not Supported 00:12:19.884 Multi-Domain Subsystem: Not Supported 00:12:19.884 Fixed Capacity Management: Not Supported 00:12:19.884 Variable Capacity Management: Not Supported 00:12:19.884 Delete Endurance Group: Not Supported 00:12:19.884 Delete NVM Set: Not Supported 00:12:19.884 Extended LBA Formats Supported: Supported 00:12:19.884 Flexible Data Placement Supported: Not Supported 00:12:19.884 00:12:19.884 Controller Memory Buffer Support 00:12:19.884 ================================ 00:12:19.884 Supported: No 00:12:19.884 00:12:19.884 Persistent Memory Region Support 00:12:19.884 ================================ 00:12:19.884 Supported: No 00:12:19.884 00:12:19.884 Admin Command Set Attributes 00:12:19.884 ============================ 00:12:19.884 Security Send/Receive: Not Supported 00:12:19.884 Format NVM: Supported 00:12:19.884 Firmware Activate/Download: Not Supported 00:12:19.884 Namespace Management: Supported 00:12:19.884 Device Self-Test: Not Supported 00:12:19.884 Directives: Supported 00:12:19.884 NVMe-MI: Not Supported 00:12:19.884 Virtualization Management: Not Supported 00:12:19.884 Doorbell Buffer Config: Supported 00:12:19.884 Get LBA Status Capability: Not Supported 00:12:19.884 Command & Feature Lockdown Capability: Not Supported 00:12:19.884 Abort Command Limit: 4 00:12:19.884 Async Event Request Limit: 4 00:12:19.884 Number of Firmware Slots: N/A 00:12:19.884 Firmware Slot 1 Read-Only: N/A 00:12:19.884 Firmware Activation Without Reset: N/A 00:12:19.884 Multiple Update Detection Support: N/A 00:12:19.884 Firmware Update Granularity: No 
Information Provided 00:12:19.884 Per-Namespace SMART Log: Yes 00:12:19.884 Asymmetric Namespace Access Log Page: Not Supported 00:12:19.884 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:12:19.884 Command Effects Log Page: Supported 00:12:19.884 Get Log Page Extended Data: Supported 00:12:19.884 Telemetry Log Pages: Not Supported 00:12:19.884 Persistent Event Log Pages: Not Supported 00:12:19.884 Supported Log Pages Log Page: May Support 00:12:19.884 Commands Supported & Effects Log Page: Not Supported 00:12:19.884 Feature Identifiers & Effects Log Page:May Support 00:12:19.884 NVMe-MI Commands & Effects Log Page: May Support 00:12:19.884 Data Area 4 for Telemetry Log: Not Supported 00:12:19.884 Error Log Page Entries Supported: 1 00:12:19.884 Keep Alive: Not Supported 00:12:19.884 00:12:19.884 NVM Command Set Attributes 00:12:19.884 ========================== 00:12:19.884 Submission Queue Entry Size 00:12:19.884 Max: 64 00:12:19.884 Min: 64 00:12:19.884 Completion Queue Entry Size 00:12:19.884 Max: 16 00:12:19.884 Min: 16 00:12:19.884 Number of Namespaces: 256 00:12:19.884 Compare Command: Supported 00:12:19.884 Write Uncorrectable Command: Not Supported 00:12:19.884 Dataset Management Command: Supported 00:12:19.884 Write Zeroes Command: Supported 00:12:19.884 Set Features Save Field: Supported 00:12:19.884 Reservations: Not Supported 00:12:19.884 Timestamp: Supported 00:12:19.884 Copy: Supported 00:12:19.884 Volatile Write Cache: Present 00:12:19.884 Atomic Write Unit (Normal): 1 00:12:19.884 Atomic Write Unit (PFail): 1 00:12:19.884 Atomic Compare & Write Unit: 1 00:12:19.884 Fused Compare & Write: Not Supported 00:12:19.884 Scatter-Gather List 00:12:19.884 SGL Command Set: Supported 00:12:19.884 SGL Keyed: Not Supported 00:12:19.884 SGL Bit Bucket Descriptor: Not Supported 00:12:19.885 SGL Metadata Pointer: Not Supported 00:12:19.885 Oversized SGL: Not Supported 00:12:19.885 SGL Metadata Address: Not Supported 00:12:19.885 SGL Offset: Not Supported 00:12:19.885 Transport SGL Data Block: Not Supported 00:12:19.885 Replay Protected Memory Block: Not Supported 00:12:19.885 00:12:19.885 Firmware Slot Information 00:12:19.885 ========================= 00:12:19.885 Active slot: 1 00:12:19.885 Slot 1 Firmware Revision: 1.0 00:12:19.885 00:12:19.885 00:12:19.885 Commands Supported and Effects 00:12:19.885 ============================== 00:12:19.885 Admin Commands 00:12:19.885 -------------- 00:12:19.885 Delete I/O Submission Queue (00h): Supported 00:12:19.885 Create I/O Submission Queue (01h): Supported 00:12:19.885 Get Log Page (02h): Supported 00:12:19.885 Delete I/O Completion Queue (04h): Supported 00:12:19.885 Create I/O Completion Queue (05h): Supported 00:12:19.885 Identify (06h): Supported 00:12:19.885 Abort (08h): Supported 00:12:19.885 Set Features (09h): Supported 00:12:19.885 Get Features (0Ah): Supported 00:12:19.885 Asynchronous Event Request (0Ch): Supported 00:12:19.885 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:19.885 Directive Send (19h): Supported 00:12:19.885 Directive Receive (1Ah): Supported 00:12:19.885 Virtualization Management (1Ch): Supported 00:12:19.885 Doorbell Buffer Config (7Ch): Supported 00:12:19.885 Format NVM (80h): Supported LBA-Change 00:12:19.885 I/O Commands 00:12:19.885 ------------ 00:12:19.885 Flush (00h): Supported LBA-Change 00:12:19.885 Write (01h): Supported LBA-Change 00:12:19.885 Read (02h): Supported 00:12:19.885 Compare (05h): Supported 00:12:19.885 Write Zeroes (08h): Supported LBA-Change 00:12:19.885 Dataset Management 
(09h): Supported LBA-Change 00:12:19.885 Unknown (0Ch): Supported 00:12:19.885 Unknown (12h): Supported 00:12:19.885 Copy (19h): Supported LBA-Change 00:12:19.885 Unknown (1Dh): Supported LBA-Change 00:12:19.885 00:12:19.885 Error Log 00:12:19.885 ========= 00:12:19.885 00:12:19.885 Arbitration 00:12:19.885 =========== 00:12:19.885 Arbitration Burst: no limit 00:12:19.885 00:12:19.885 Power Management 00:12:19.885 ================ 00:12:19.885 Number of Power States: 1 00:12:19.885 Current Power State: Power State #0 00:12:19.885 Power State #0: 00:12:19.885 Max Power: 25.00 W 00:12:19.885 Non-Operational State: Operational 00:12:19.885 Entry Latency: 16 microseconds 00:12:19.885 Exit Latency: 4 microseconds 00:12:19.885 Relative Read Throughput: 0 00:12:19.885 Relative Read Latency: 0 00:12:19.885 Relative Write Throughput: 0 00:12:19.885 Relative Write Latency: 0 00:12:19.885 Idle Power: Not Reported 00:12:19.885 Active Power: Not Reported 00:12:19.885 Non-Operational Permissive Mode: Not Supported 00:12:19.885 00:12:19.885 Health Information 00:12:19.885 ================== 00:12:19.885 Critical Warnings: 00:12:19.885 Available Spare Space: OK 00:12:19.885 Temperature: OK [2024-11-20 11:26:25.491104] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64732 terminated unexpected 00:12:19.885 Device Reliability: OK 00:12:19.885 Read Only: No 00:12:19.885 Volatile Memory Backup: OK 00:12:19.885 Current Temperature: 323 Kelvin (50 Celsius) 00:12:19.885 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:19.885 Available Spare: 0% 00:12:19.885 Available Spare Threshold: 0% 00:12:19.885 Life Percentage Used: 0% 00:12:19.885 Data Units Read: 1003 00:12:19.885 Data Units Written: 870 00:12:19.885 Host Read Commands: 47730 00:12:19.885 Host Write Commands: 46524 00:12:19.885 Controller Busy Time: 0 minutes 00:12:19.885 Power Cycles: 0 00:12:19.885 Power On Hours: 0 hours 00:12:19.885 Unsafe Shutdowns: 0 00:12:19.885 Unrecoverable Media Errors: 0 00:12:19.885 Lifetime Error Log Entries: 0 00:12:19.885 Warning Temperature Time: 0 minutes 00:12:19.885 Critical Temperature Time: 0 minutes 00:12:19.885 00:12:19.885 Number of Queues 00:12:19.885 ================ 00:12:19.885 Number of I/O Submission Queues: 64 00:12:19.885 Number of I/O Completion Queues: 64 00:12:19.885 00:12:19.885 ZNS Specific Controller Data 00:12:19.885 ============================ 00:12:19.885 Zone Append Size Limit: 0 00:12:19.885 00:12:19.885 00:12:19.885 Active Namespaces 00:12:19.885 ================= 00:12:19.885 Namespace ID:1 00:12:19.885 Error Recovery Timeout: Unlimited 00:12:19.885 Command Set Identifier: NVM (00h) 00:12:19.885 Deallocate: Supported 00:12:19.885 Deallocated/Unwritten Error: Supported 00:12:19.885 Deallocated Read Value: All 0x00 00:12:19.885 Deallocate in Write Zeroes: Not Supported 00:12:19.885 Deallocated Guard Field: 0xFFFF 00:12:19.885 Flush: Supported 00:12:19.885 Reservation: Not Supported 00:12:19.885 Namespace Sharing Capabilities: Private 00:12:19.885 Size (in LBAs): 1310720 (5GiB) 00:12:19.885 Capacity (in LBAs): 1310720 (5GiB) 00:12:19.885 Utilization (in LBAs): 1310720 (5GiB) 00:12:19.885 Thin Provisioning: Not Supported 00:12:19.885 Per-NS Atomic Units: No 00:12:19.885 Maximum Single Source Range Length: 128 00:12:19.885 Maximum Copy Length: 128 00:12:19.885 Maximum Source Range Count: 128 00:12:19.885 NGUID/EUI64 Never Reused: No 00:12:19.885 Namespace Write Protected: No 00:12:19.885 Number of LBA Formats: 8 00:12:19.885 Current LBA Format: 
LBA Format #04 00:12:19.885 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:19.885 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:19.885 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:19.885 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:19.885 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:19.886 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:19.886 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:19.886 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:19.886 00:12:19.886 NVM Specific Namespace Data 00:12:19.886 =========================== 00:12:19.886 Logical Block Storage Tag Mask: 0 00:12:19.886 Protection Information Capabilities: 00:12:19.886 16b Guard Protection Information Storage Tag Support: No 00:12:19.886 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:19.886 Storage Tag Check Read Support: No 00:12:19.886 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:19.886 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:19.886 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:19.886 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:19.886 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:19.886 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:19.886 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:19.886 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:19.886 ===================================================== 00:12:19.886 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:19.886 ===================================================== 00:12:19.886 Controller Capabilities/Features 00:12:19.886 ================================ 00:12:19.886 Vendor ID: 1b36 00:12:19.886 Subsystem Vendor ID: 1af4 00:12:19.886 Serial Number: 12343 00:12:19.886 Model Number: QEMU NVMe Ctrl 00:12:19.886 Firmware Version: 8.0.0 00:12:19.886 Recommended Arb Burst: 6 00:12:19.886 IEEE OUI Identifier: 00 54 52 00:12:19.886 Multi-path I/O 00:12:19.886 May have multiple subsystem ports: No 00:12:19.886 May have multiple controllers: Yes 00:12:19.886 Associated with SR-IOV VF: No 00:12:19.886 Max Data Transfer Size: 524288 00:12:19.886 Max Number of Namespaces: 256 00:12:19.886 Max Number of I/O Queues: 64 00:12:19.886 NVMe Specification Version (VS): 1.4 00:12:19.886 NVMe Specification Version (Identify): 1.4 00:12:19.886 Maximum Queue Entries: 2048 00:12:19.886 Contiguous Queues Required: Yes 00:12:19.886 Arbitration Mechanisms Supported 00:12:19.886 Weighted Round Robin: Not Supported 00:12:19.886 Vendor Specific: Not Supported 00:12:19.886 Reset Timeout: 7500 ms 00:12:19.886 Doorbell Stride: 4 bytes 00:12:19.886 NVM Subsystem Reset: Not Supported 00:12:19.886 Command Sets Supported 00:12:19.886 NVM Command Set: Supported 00:12:19.886 Boot Partition: Not Supported 00:12:19.886 Memory Page Size Minimum: 4096 bytes 00:12:19.886 Memory Page Size Maximum: 65536 bytes 00:12:19.886 Persistent Memory Region: Not Supported 00:12:19.886 Optional Asynchronous Events Supported 00:12:19.886 Namespace Attribute Notices: Supported 00:12:19.886 Firmware Activation Notices: Not Supported 00:12:19.886 ANA Change Notices: Not Supported 00:12:19.886 PLE Aggregate Log 
Change Notices: Not Supported 00:12:19.886 LBA Status Info Alert Notices: Not Supported 00:12:19.886 EGE Aggregate Log Change Notices: Not Supported 00:12:19.886 Normal NVM Subsystem Shutdown event: Not Supported 00:12:19.886 Zone Descriptor Change Notices: Not Supported 00:12:19.886 Discovery Log Change Notices: Not Supported 00:12:19.886 Controller Attributes 00:12:19.886 128-bit Host Identifier: Not Supported 00:12:19.886 Non-Operational Permissive Mode: Not Supported 00:12:19.886 NVM Sets: Not Supported 00:12:19.886 Read Recovery Levels: Not Supported 00:12:19.886 Endurance Groups: Supported 00:12:19.886 Predictable Latency Mode: Not Supported 00:12:19.886 Traffic Based Keep ALive: Not Supported 00:12:19.886 Namespace Granularity: Not Supported 00:12:19.886 SQ Associations: Not Supported 00:12:19.886 UUID List: Not Supported 00:12:19.886 Multi-Domain Subsystem: Not Supported 00:12:19.886 Fixed Capacity Management: Not Supported 00:12:19.886 Variable Capacity Management: Not Supported 00:12:19.886 Delete Endurance Group: Not Supported 00:12:19.886 Delete NVM Set: Not Supported 00:12:19.886 Extended LBA Formats Supported: Supported 00:12:19.886 Flexible Data Placement Supported: Supported 00:12:19.886 00:12:19.886 Controller Memory Buffer Support 00:12:19.886 ================================ 00:12:19.886 Supported: No 00:12:19.886 00:12:19.886 Persistent Memory Region Support 00:12:19.886 ================================ 00:12:19.886 Supported: No 00:12:19.886 00:12:19.886 Admin Command Set Attributes 00:12:19.886 ============================ 00:12:19.886 Security Send/Receive: Not Supported 00:12:19.886 Format NVM: Supported 00:12:19.886 Firmware Activate/Download: Not Supported 00:12:19.886 Namespace Management: Supported 00:12:19.886 Device Self-Test: Not Supported 00:12:19.886 Directives: Supported 00:12:19.886 NVMe-MI: Not Supported 00:12:19.886 Virtualization Management: Not Supported 00:12:19.886 Doorbell Buffer Config: Supported 00:12:19.886 Get LBA Status Capability: Not Supported 00:12:19.886 Command & Feature Lockdown Capability: Not Supported 00:12:19.886 Abort Command Limit: 4 00:12:19.886 Async Event Request Limit: 4 00:12:19.886 Number of Firmware Slots: N/A 00:12:19.886 Firmware Slot 1 Read-Only: N/A 00:12:19.886 Firmware Activation Without Reset: N/A 00:12:19.886 Multiple Update Detection Support: N/A 00:12:19.886 Firmware Update Granularity: No Information Provided 00:12:19.886 Per-Namespace SMART Log: Yes 00:12:19.886 Asymmetric Namespace Access Log Page: Not Supported 00:12:19.886 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:12:19.886 Command Effects Log Page: Supported 00:12:19.886 Get Log Page Extended Data: Supported 00:12:19.886 Telemetry Log Pages: Not Supported 00:12:19.886 Persistent Event Log Pages: Not Supported 00:12:19.886 Supported Log Pages Log Page: May Support 00:12:19.886 Commands Supported & Effects Log Page: Not Supported 00:12:19.886 Feature Identifiers & Effects Log Page:May Support 00:12:19.886 NVMe-MI Commands & Effects Log Page: May Support 00:12:19.886 Data Area 4 for Telemetry Log: Not Supported 00:12:19.886 Error Log Page Entries Supported: 1 00:12:19.886 Keep Alive: Not Supported 00:12:19.886 00:12:19.886 NVM Command Set Attributes 00:12:19.886 ========================== 00:12:19.886 Submission Queue Entry Size 00:12:19.886 Max: 64 00:12:19.886 Min: 64 00:12:19.886 Completion Queue Entry Size 00:12:19.886 Max: 16 00:12:19.886 Min: 16 00:12:19.886 Number of Namespaces: 256 00:12:19.886 Compare Command: Supported 00:12:19.886 Write 
Uncorrectable Command: Not Supported 00:12:19.886 Dataset Management Command: Supported 00:12:19.886 Write Zeroes Command: Supported 00:12:19.886 Set Features Save Field: Supported 00:12:19.886 Reservations: Not Supported 00:12:19.886 Timestamp: Supported 00:12:19.886 Copy: Supported 00:12:19.886 Volatile Write Cache: Present 00:12:19.886 Atomic Write Unit (Normal): 1 00:12:19.886 Atomic Write Unit (PFail): 1 00:12:19.886 Atomic Compare & Write Unit: 1 00:12:19.886 Fused Compare & Write: Not Supported 00:12:19.886 Scatter-Gather List 00:12:19.886 SGL Command Set: Supported 00:12:19.886 SGL Keyed: Not Supported 00:12:19.886 SGL Bit Bucket Descriptor: Not Supported 00:12:19.886 SGL Metadata Pointer: Not Supported 00:12:19.886 Oversized SGL: Not Supported 00:12:19.886 SGL Metadata Address: Not Supported 00:12:19.886 SGL Offset: Not Supported 00:12:19.886 Transport SGL Data Block: Not Supported 00:12:19.886 Replay Protected Memory Block: Not Supported 00:12:19.886 00:12:19.886 Firmware Slot Information 00:12:19.886 ========================= 00:12:19.886 Active slot: 1 00:12:19.886 Slot 1 Firmware Revision: 1.0 00:12:19.886 00:12:19.886 00:12:19.886 Commands Supported and Effects 00:12:19.886 ============================== 00:12:19.886 Admin Commands 00:12:19.886 -------------- 00:12:19.886 Delete I/O Submission Queue (00h): Supported 00:12:19.886 Create I/O Submission Queue (01h): Supported 00:12:19.886 Get Log Page (02h): Supported 00:12:19.886 Delete I/O Completion Queue (04h): Supported 00:12:19.886 Create I/O Completion Queue (05h): Supported 00:12:19.886 Identify (06h): Supported 00:12:19.886 Abort (08h): Supported 00:12:19.886 Set Features (09h): Supported 00:12:19.886 Get Features (0Ah): Supported 00:12:19.886 Asynchronous Event Request (0Ch): Supported 00:12:19.886 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:19.886 Directive Send (19h): Supported 00:12:19.886 Directive Receive (1Ah): Supported 00:12:19.886 Virtualization Management (1Ch): Supported 00:12:19.886 Doorbell Buffer Config (7Ch): Supported 00:12:19.886 Format NVM (80h): Supported LBA-Change 00:12:19.886 I/O Commands 00:12:19.886 ------------ 00:12:19.886 Flush (00h): Supported LBA-Change 00:12:19.886 Write (01h): Supported LBA-Change 00:12:19.887 Read (02h): Supported 00:12:19.887 Compare (05h): Supported 00:12:19.887 Write Zeroes (08h): Supported LBA-Change 00:12:19.887 Dataset Management (09h): Supported LBA-Change 00:12:19.887 Unknown (0Ch): Supported 00:12:19.887 Unknown (12h): Supported 00:12:19.887 Copy (19h): Supported LBA-Change 00:12:19.887 Unknown (1Dh): Supported LBA-Change 00:12:19.887 00:12:19.887 Error Log 00:12:19.887 ========= 00:12:19.887 00:12:19.887 Arbitration 00:12:19.887 =========== 00:12:19.887 Arbitration Burst: no limit 00:12:19.887 00:12:19.887 Power Management 00:12:19.887 ================ 00:12:19.887 Number of Power States: 1 00:12:19.887 Current Power State: Power State #0 00:12:19.887 Power State #0: 00:12:19.887 Max Power: 25.00 W 00:12:19.887 Non-Operational State: Operational 00:12:19.887 Entry Latency: 16 microseconds 00:12:19.887 Exit Latency: 4 microseconds 00:12:19.887 Relative Read Throughput: 0 00:12:19.887 Relative Read Latency: 0 00:12:19.887 Relative Write Throughput: 0 00:12:19.887 Relative Write Latency: 0 00:12:19.887 Idle Power: Not Reported 00:12:19.887 Active Power: Not Reported 00:12:19.887 Non-Operational Permissive Mode: Not Supported 00:12:19.887 00:12:19.887 Health Information 00:12:19.887 ================== 00:12:19.887 Critical Warnings: 00:12:19.887 
Available Spare Space: OK 00:12:19.887 Temperature: OK 00:12:19.887 Device Reliability: OK 00:12:19.887 Read Only: No 00:12:19.887 Volatile Memory Backup: OK 00:12:19.887 Current Temperature: 323 Kelvin (50 Celsius) 00:12:19.887 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:19.887 Available Spare: 0% 00:12:19.887 Available Spare Threshold: 0% 00:12:19.887 Life Percentage Used: 0% 00:12:19.887 Data Units Read: 740 00:12:19.887 Data Units Written: 669 00:12:19.887 Host Read Commands: 32921 00:12:19.887 Host Write Commands: 32344 00:12:19.887 Controller Busy Time: 0 minutes 00:12:19.887 Power Cycles: 0 00:12:19.887 Power On Hours: 0 hours 00:12:19.887 Unsafe Shutdowns: 0 00:12:19.887 Unrecoverable Media Errors: 0 00:12:19.887 Lifetime Error Log Entries: 0 00:12:19.887 Warning Temperature Time: 0 minutes 00:12:19.887 Critical Temperature Time: 0 minutes 00:12:19.887 00:12:19.887 Number of Queues 00:12:19.887 ================ 00:12:19.887 Number of I/O Submission Queues: 64 00:12:19.887 Number of I/O Completion Queues: 64 00:12:19.887 00:12:19.887 ZNS Specific Controller Data 00:12:19.887 ============================ 00:12:19.887 Zone Append Size Limit: 0 00:12:19.887 00:12:19.887 00:12:19.887 Active Namespaces 00:12:19.887 ================= 00:12:19.887 Namespace ID:1 00:12:19.887 Error Recovery Timeout: Unlimited 00:12:19.887 Command Set Identifier: NVM (00h) 00:12:19.887 Deallocate: Supported 00:12:19.887 Deallocated/Unwritten Error: Supported 00:12:19.887 Deallocated Read Value: All 0x00 00:12:19.887 Deallocate in Write Zeroes: Not Supported 00:12:19.887 Deallocated Guard Field: 0xFFFF 00:12:19.887 Flush: Supported 00:12:19.887 Reservation: Not Supported 00:12:19.887 Namespace Sharing Capabilities: Multiple Controllers 00:12:19.887 Size (in LBAs): 262144 (1GiB) 00:12:19.887 Capacity (in LBAs): 262144 (1GiB) 00:12:19.887 Utilization (in LBAs): 262144 (1GiB) 00:12:19.887 Thin Provisioning: Not Supported 00:12:19.887 Per-NS Atomic Units: No 00:12:19.887 Maximum Single Source Range Length: 128 00:12:19.887 Maximum Copy Length: 128 00:12:19.887 Maximum Source Range Count: 128 00:12:19.887 NGUID/EUI64 Never Reused: No 00:12:19.887 Namespace Write Protected: No 00:12:19.887 Endurance group ID: 1 00:12:19.887 Number of LBA Formats: 8 00:12:19.887 Current LBA Format: LBA Format #04 00:12:19.887 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:19.887 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:19.887 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:19.887 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:19.887 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:19.887 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:19.887 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:19.887 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:19.887 00:12:19.887 Get Feature FDP: 00:12:19.887 ================ 00:12:19.887 Enabled: Yes 00:12:19.887 FDP configuration index: 0 00:12:19.887 00:12:19.887 FDP configurations log page 00:12:19.887 =========================== 00:12:19.887 Number of FDP configurations: 1 00:12:19.887 Version: 0 00:12:19.887 Size: 112 00:12:19.887 FDP Configuration Descriptor: 0 00:12:19.887 Descriptor Size: 96 00:12:19.887 Reclaim Group Identifier format: 2 00:12:19.887 FDP Volatile Write Cache: Not Present 00:12:19.887 FDP Configuration: Valid 00:12:19.887 Vendor Specific Size: 0 00:12:19.887 Number of Reclaim Groups: 2 00:12:19.887 Number of Reclaim Unit Handles: 8 00:12:19.887 Max Placement Identifiers: 128 00:12:19.887 Number of 
Namespaces Supported: 256 00:12:19.887 Reclaim Unit Nominal Size: 6000000 bytes 00:12:19.887 Estimated Reclaim Unit Time Limit: Not Reported 00:12:19.887 RUH Desc #000: RUH Type: Initially Isolated 00:12:19.887 RUH Desc #001: RUH Type: Initially Isolated 00:12:19.887 RUH Desc #002: RUH Type: Initially Isolated 00:12:19.887 RUH Desc #003: RUH Type: Initially Isolated 00:12:19.887 RUH Desc #004: RUH Type: Initially Isolated 00:12:19.887 RUH Desc #005: RUH Type: Initially Isolated 00:12:19.887 RUH Desc #006: RUH Type: Initially Isolated 00:12:19.887 RUH Desc #007: RUH Type: Initially Isolated 00:12:19.887 00:12:19.887 FDP reclaim unit handle usage log page 00:12:19.887 ====================================== 00:12:19.887 Number of Reclaim Unit Handles: 8 00:12:19.887 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:12:19.887 RUH Usage Desc #001: RUH Attributes: Unused 00:12:19.887 RUH Usage Desc #002: RUH Attributes: Unused 00:12:19.887 RUH Usage Desc #003: RUH Attributes: Unused 00:12:19.887 RUH Usage Desc #004: RUH Attributes: Unused 00:12:19.887 RUH Usage Desc #005: RUH Attributes: Unused 00:12:19.887 RUH Usage Desc #006: RUH Attributes: Unused 00:12:19.887 RUH Usage Desc #007: RUH Attributes: Unused 00:12:19.887 00:12:19.887 FDP statistics log page 00:12:19.887 ======================= 00:12:19.887 Host bytes with metadata written: 419274752 00:12:19.887 Media bytes with metadata written: 419319808 [2024-11-20 11:26:25.492336] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64732 terminated unexpected 00:12:19.887 Media bytes erased: 0 00:12:19.887 00:12:19.887 FDP events log page 00:12:19.887 =================== 00:12:19.887 Number of FDP events: 0 00:12:19.887 00:12:19.887 NVM Specific Namespace Data 00:12:19.887 =========================== 00:12:19.887 Logical Block Storage Tag Mask: 0 00:12:19.887 Protection Information Capabilities: 00:12:19.887 16b Guard Protection Information Storage Tag Support: No 00:12:19.887 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:19.887 Storage Tag Check Read Support: No 00:12:19.887 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:19.887 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:19.887 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:19.887 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:19.887 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:19.887 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:19.887 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:19.887 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:19.887 ===================================================== 00:12:19.887 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:19.887 ===================================================== 00:12:19.887 Controller Capabilities/Features 00:12:19.887 ================================ 00:12:19.887 Vendor ID: 1b36 00:12:19.887 Subsystem Vendor ID: 1af4 00:12:19.887 Serial Number: 12342 00:12:19.887 Model Number: QEMU NVMe Ctrl 00:12:19.887 Firmware Version: 8.0.0 00:12:19.887 Recommended Arb Burst: 6 00:12:19.887 IEEE OUI Identifier: 00 54 52 00:12:19.887 Multi-path I/O 
00:12:19.887 May have multiple subsystem ports: No 00:12:19.887 May have multiple controllers: No 00:12:19.887 Associated with SR-IOV VF: No 00:12:19.887 Max Data Transfer Size: 524288 00:12:19.887 Max Number of Namespaces: 256 00:12:19.887 Max Number of I/O Queues: 64 00:12:19.887 NVMe Specification Version (VS): 1.4 00:12:19.887 NVMe Specification Version (Identify): 1.4 00:12:19.887 Maximum Queue Entries: 2048 00:12:19.887 Contiguous Queues Required: Yes 00:12:19.887 Arbitration Mechanisms Supported 00:12:19.888 Weighted Round Robin: Not Supported 00:12:19.888 Vendor Specific: Not Supported 00:12:19.888 Reset Timeout: 7500 ms 00:12:19.888 Doorbell Stride: 4 bytes 00:12:19.888 NVM Subsystem Reset: Not Supported 00:12:19.888 Command Sets Supported 00:12:19.888 NVM Command Set: Supported 00:12:19.888 Boot Partition: Not Supported 00:12:19.888 Memory Page Size Minimum: 4096 bytes 00:12:19.888 Memory Page Size Maximum: 65536 bytes 00:12:19.888 Persistent Memory Region: Not Supported 00:12:19.888 Optional Asynchronous Events Supported 00:12:19.888 Namespace Attribute Notices: Supported 00:12:19.888 Firmware Activation Notices: Not Supported 00:12:19.888 ANA Change Notices: Not Supported 00:12:19.888 PLE Aggregate Log Change Notices: Not Supported 00:12:19.888 LBA Status Info Alert Notices: Not Supported 00:12:19.888 EGE Aggregate Log Change Notices: Not Supported 00:12:19.888 Normal NVM Subsystem Shutdown event: Not Supported 00:12:19.888 Zone Descriptor Change Notices: Not Supported 00:12:19.888 Discovery Log Change Notices: Not Supported 00:12:19.888 Controller Attributes 00:12:19.888 128-bit Host Identifier: Not Supported 00:12:19.888 Non-Operational Permissive Mode: Not Supported 00:12:19.888 NVM Sets: Not Supported 00:12:19.888 Read Recovery Levels: Not Supported 00:12:19.888 Endurance Groups: Not Supported 00:12:19.888 Predictable Latency Mode: Not Supported 00:12:19.888 Traffic Based Keep ALive: Not Supported 00:12:19.888 Namespace Granularity: Not Supported 00:12:19.888 SQ Associations: Not Supported 00:12:19.888 UUID List: Not Supported 00:12:19.888 Multi-Domain Subsystem: Not Supported 00:12:19.888 Fixed Capacity Management: Not Supported 00:12:19.888 Variable Capacity Management: Not Supported 00:12:19.888 Delete Endurance Group: Not Supported 00:12:19.888 Delete NVM Set: Not Supported 00:12:19.888 Extended LBA Formats Supported: Supported 00:12:19.888 Flexible Data Placement Supported: Not Supported 00:12:19.888 00:12:19.888 Controller Memory Buffer Support 00:12:19.888 ================================ 00:12:19.888 Supported: No 00:12:19.888 00:12:19.888 Persistent Memory Region Support 00:12:19.888 ================================ 00:12:19.888 Supported: No 00:12:19.888 00:12:19.888 Admin Command Set Attributes 00:12:19.888 ============================ 00:12:19.888 Security Send/Receive: Not Supported 00:12:19.888 Format NVM: Supported 00:12:19.888 Firmware Activate/Download: Not Supported 00:12:19.888 Namespace Management: Supported 00:12:19.888 Device Self-Test: Not Supported 00:12:19.888 Directives: Supported 00:12:19.888 NVMe-MI: Not Supported 00:12:19.888 Virtualization Management: Not Supported 00:12:19.888 Doorbell Buffer Config: Supported 00:12:19.888 Get LBA Status Capability: Not Supported 00:12:19.888 Command & Feature Lockdown Capability: Not Supported 00:12:19.888 Abort Command Limit: 4 00:12:19.888 Async Event Request Limit: 4 00:12:19.888 Number of Firmware Slots: N/A 00:12:19.888 Firmware Slot 1 Read-Only: N/A 00:12:19.888 Firmware Activation Without Reset: N/A 
00:12:19.888 Multiple Update Detection Support: N/A 00:12:19.888 Firmware Update Granularity: No Information Provided 00:12:19.888 Per-Namespace SMART Log: Yes 00:12:19.888 Asymmetric Namespace Access Log Page: Not Supported 00:12:19.888 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:12:19.888 Command Effects Log Page: Supported 00:12:19.888 Get Log Page Extended Data: Supported 00:12:19.888 Telemetry Log Pages: Not Supported 00:12:19.888 Persistent Event Log Pages: Not Supported 00:12:19.888 Supported Log Pages Log Page: May Support 00:12:19.888 Commands Supported & Effects Log Page: Not Supported 00:12:19.888 Feature Identifiers & Effects Log Page:May Support 00:12:19.888 NVMe-MI Commands & Effects Log Page: May Support 00:12:19.888 Data Area 4 for Telemetry Log: Not Supported 00:12:19.888 Error Log Page Entries Supported: 1 00:12:19.888 Keep Alive: Not Supported 00:12:19.888 00:12:19.888 NVM Command Set Attributes 00:12:19.888 ========================== 00:12:19.888 Submission Queue Entry Size 00:12:19.888 Max: 64 00:12:19.888 Min: 64 00:12:19.888 Completion Queue Entry Size 00:12:19.888 Max: 16 00:12:19.888 Min: 16 00:12:19.888 Number of Namespaces: 256 00:12:19.888 Compare Command: Supported 00:12:19.888 Write Uncorrectable Command: Not Supported 00:12:19.888 Dataset Management Command: Supported 00:12:19.888 Write Zeroes Command: Supported 00:12:19.888 Set Features Save Field: Supported 00:12:19.888 Reservations: Not Supported 00:12:19.888 Timestamp: Supported 00:12:19.888 Copy: Supported 00:12:19.888 Volatile Write Cache: Present 00:12:19.888 Atomic Write Unit (Normal): 1 00:12:19.888 Atomic Write Unit (PFail): 1 00:12:19.888 Atomic Compare & Write Unit: 1 00:12:19.888 Fused Compare & Write: Not Supported 00:12:19.888 Scatter-Gather List 00:12:19.888 SGL Command Set: Supported 00:12:19.888 SGL Keyed: Not Supported 00:12:19.888 SGL Bit Bucket Descriptor: Not Supported 00:12:19.888 SGL Metadata Pointer: Not Supported 00:12:19.888 Oversized SGL: Not Supported 00:12:19.888 SGL Metadata Address: Not Supported 00:12:19.888 SGL Offset: Not Supported 00:12:19.888 Transport SGL Data Block: Not Supported 00:12:19.888 Replay Protected Memory Block: Not Supported 00:12:19.888 00:12:19.888 Firmware Slot Information 00:12:19.888 ========================= 00:12:19.888 Active slot: 1 00:12:19.888 Slot 1 Firmware Revision: 1.0 00:12:19.888 00:12:19.888 00:12:19.888 Commands Supported and Effects 00:12:19.888 ============================== 00:12:19.888 Admin Commands 00:12:19.888 -------------- 00:12:19.888 Delete I/O Submission Queue (00h): Supported 00:12:19.888 Create I/O Submission Queue (01h): Supported 00:12:19.888 Get Log Page (02h): Supported 00:12:19.888 Delete I/O Completion Queue (04h): Supported 00:12:19.888 Create I/O Completion Queue (05h): Supported 00:12:19.888 Identify (06h): Supported 00:12:19.888 Abort (08h): Supported 00:12:19.888 Set Features (09h): Supported 00:12:19.888 Get Features (0Ah): Supported 00:12:19.888 Asynchronous Event Request (0Ch): Supported 00:12:19.888 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:19.888 Directive Send (19h): Supported 00:12:19.888 Directive Receive (1Ah): Supported 00:12:19.888 Virtualization Management (1Ch): Supported 00:12:19.888 Doorbell Buffer Config (7Ch): Supported 00:12:19.888 Format NVM (80h): Supported LBA-Change 00:12:19.888 I/O Commands 00:12:19.888 ------------ 00:12:19.888 Flush (00h): Supported LBA-Change 00:12:19.888 Write (01h): Supported LBA-Change 00:12:19.888 Read (02h): Supported 00:12:19.888 Compare (05h): 
Supported 00:12:19.888 Write Zeroes (08h): Supported LBA-Change 00:12:19.888 Dataset Management (09h): Supported LBA-Change 00:12:19.888 Unknown (0Ch): Supported 00:12:19.888 Unknown (12h): Supported 00:12:19.888 Copy (19h): Supported LBA-Change 00:12:19.888 Unknown (1Dh): Supported LBA-Change 00:12:19.888 00:12:19.888 Error Log 00:12:19.888 ========= 00:12:19.888 00:12:19.888 Arbitration 00:12:19.888 =========== 00:12:19.888 Arbitration Burst: no limit 00:12:19.888 00:12:19.888 Power Management 00:12:19.888 ================ 00:12:19.888 Number of Power States: 1 00:12:19.888 Current Power State: Power State #0 00:12:19.888 Power State #0: 00:12:19.889 Max Power: 25.00 W 00:12:19.889 Non-Operational State: Operational 00:12:19.889 Entry Latency: 16 microseconds 00:12:19.889 Exit Latency: 4 microseconds 00:12:19.889 Relative Read Throughput: 0 00:12:19.889 Relative Read Latency: 0 00:12:19.889 Relative Write Throughput: 0 00:12:19.889 Relative Write Latency: 0 00:12:19.889 Idle Power: Not Reported 00:12:19.889 Active Power: Not Reported 00:12:19.889 Non-Operational Permissive Mode: Not Supported 00:12:19.889 00:12:19.889 Health Information 00:12:19.889 ================== 00:12:19.889 Critical Warnings: 00:12:19.889 Available Spare Space: OK 00:12:19.889 Temperature: OK 00:12:19.889 Device Reliability: OK 00:12:19.889 Read Only: No 00:12:19.889 Volatile Memory Backup: OK 00:12:19.889 Current Temperature: 323 Kelvin (50 Celsius) 00:12:19.889 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:19.889 Available Spare: 0% 00:12:19.889 Available Spare Threshold: 0% 00:12:19.889 Life Percentage Used: 0% 00:12:19.889 Data Units Read: 2124 00:12:19.889 Data Units Written: 1911 00:12:19.889 Host Read Commands: 97415 00:12:19.889 Host Write Commands: 95684 00:12:19.889 Controller Busy Time: 0 minutes 00:12:19.889 Power Cycles: 0 00:12:19.889 Power On Hours: 0 hours 00:12:19.889 Unsafe Shutdowns: 0 00:12:19.889 Unrecoverable Media Errors: 0 00:12:19.889 Lifetime Error Log Entries: 0 00:12:19.889 Warning Temperature Time: 0 minutes 00:12:19.889 Critical Temperature Time: 0 minutes 00:12:19.889 00:12:19.889 Number of Queues 00:12:19.889 ================ 00:12:19.889 Number of I/O Submission Queues: 64 00:12:19.889 Number of I/O Completion Queues: 64 00:12:19.889 00:12:19.889 ZNS Specific Controller Data 00:12:19.889 ============================ 00:12:19.889 Zone Append Size Limit: 0 00:12:19.889 00:12:19.889 00:12:19.889 Active Namespaces 00:12:19.889 ================= 00:12:19.889 Namespace ID:1 00:12:19.889 Error Recovery Timeout: Unlimited 00:12:19.889 Command Set Identifier: NVM (00h) 00:12:19.889 Deallocate: Supported 00:12:19.889 Deallocated/Unwritten Error: Supported 00:12:19.889 Deallocated Read Value: All 0x00 00:12:19.889 Deallocate in Write Zeroes: Not Supported 00:12:19.889 Deallocated Guard Field: 0xFFFF 00:12:19.889 Flush: Supported 00:12:19.889 Reservation: Not Supported 00:12:19.889 Namespace Sharing Capabilities: Private 00:12:19.889 Size (in LBAs): 1048576 (4GiB) 00:12:19.889 Capacity (in LBAs): 1048576 (4GiB) 00:12:19.889 Utilization (in LBAs): 1048576 (4GiB) 00:12:19.889 Thin Provisioning: Not Supported 00:12:19.889 Per-NS Atomic Units: No 00:12:19.889 Maximum Single Source Range Length: 128 00:12:19.889 Maximum Copy Length: 128 00:12:19.889 Maximum Source Range Count: 128 00:12:19.889 NGUID/EUI64 Never Reused: No 00:12:19.889 Namespace Write Protected: No 00:12:19.889 Number of LBA Formats: 8 00:12:19.889 Current LBA Format: LBA Format #04 00:12:19.889 LBA Format #00: Data Size: 512 
Metadata Size: 0 00:12:19.889 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:19.889 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:19.889 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:19.889 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:19.889 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:19.889 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:19.889 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:19.889 00:12:19.889 NVM Specific Namespace Data 00:12:19.889 =========================== 00:12:19.889 Logical Block Storage Tag Mask: 0 00:12:19.889 Protection Information Capabilities: 00:12:19.889 16b Guard Protection Information Storage Tag Support: No 00:12:19.889 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:19.889 Storage Tag Check Read Support: No 00:12:19.889 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:19.889 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:19.889 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:19.889 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:19.889 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:19.889 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:19.889 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:19.889 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:19.889 Namespace ID:2 00:12:19.889 Error Recovery Timeout: Unlimited 00:12:19.889 Command Set Identifier: NVM (00h) 00:12:19.889 Deallocate: Supported 00:12:19.889 Deallocated/Unwritten Error: Supported 00:12:19.889 Deallocated Read Value: All 0x00 00:12:19.889 Deallocate in Write Zeroes: Not Supported 00:12:19.889 Deallocated Guard Field: 0xFFFF 00:12:19.889 Flush: Supported 00:12:19.889 Reservation: Not Supported 00:12:19.889 Namespace Sharing Capabilities: Private 00:12:19.889 Size (in LBAs): 1048576 (4GiB) 00:12:19.889 Capacity (in LBAs): 1048576 (4GiB) 00:12:19.889 Utilization (in LBAs): 1048576 (4GiB) 00:12:19.889 Thin Provisioning: Not Supported 00:12:19.889 Per-NS Atomic Units: No 00:12:19.889 Maximum Single Source Range Length: 128 00:12:19.889 Maximum Copy Length: 128 00:12:19.889 Maximum Source Range Count: 128 00:12:19.889 NGUID/EUI64 Never Reused: No 00:12:19.889 Namespace Write Protected: No 00:12:19.889 Number of LBA Formats: 8 00:12:19.889 Current LBA Format: LBA Format #04 00:12:19.889 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:19.889 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:19.889 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:19.889 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:19.889 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:19.889 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:19.889 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:19.889 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:19.889 00:12:19.889 NVM Specific Namespace Data 00:12:19.889 =========================== 00:12:19.889 Logical Block Storage Tag Mask: 0 00:12:19.889 Protection Information Capabilities: 00:12:19.889 16b Guard Protection Information Storage Tag Support: No 00:12:19.889 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
00:12:19.889 Storage Tag Check Read Support: No 00:12:19.889 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:19.889 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:19.889 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:19.889 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:19.889 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:19.889 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:19.889 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:19.889 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:19.889 Namespace ID:3 00:12:19.889 Error Recovery Timeout: Unlimited 00:12:19.889 Command Set Identifier: NVM (00h) 00:12:19.889 Deallocate: Supported 00:12:19.889 Deallocated/Unwritten Error: Supported 00:12:19.889 Deallocated Read Value: All 0x00 00:12:19.889 Deallocate in Write Zeroes: Not Supported 00:12:19.889 Deallocated Guard Field: 0xFFFF 00:12:19.889 Flush: Supported 00:12:19.889 Reservation: Not Supported 00:12:19.889 Namespace Sharing Capabilities: Private 00:12:19.889 Size (in LBAs): 1048576 (4GiB) 00:12:19.889 Capacity (in LBAs): 1048576 (4GiB) 00:12:19.889 Utilization (in LBAs): 1048576 (4GiB) 00:12:19.889 Thin Provisioning: Not Supported 00:12:19.889 Per-NS Atomic Units: No 00:12:19.889 Maximum Single Source Range Length: 128 00:12:19.889 Maximum Copy Length: 128 00:12:19.889 Maximum Source Range Count: 128 00:12:19.889 NGUID/EUI64 Never Reused: No 00:12:19.889 Namespace Write Protected: No 00:12:19.890 Number of LBA Formats: 8 00:12:19.890 Current LBA Format: LBA Format #04 00:12:19.890 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:19.890 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:19.890 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:19.890 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:19.890 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:19.890 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:19.890 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:19.890 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:19.890 00:12:19.890 NVM Specific Namespace Data 00:12:19.890 =========================== 00:12:19.890 Logical Block Storage Tag Mask: 0 00:12:19.890 Protection Information Capabilities: 00:12:19.890 16b Guard Protection Information Storage Tag Support: No 00:12:19.890 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:19.890 Storage Tag Check Read Support: No 00:12:19.890 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:19.890 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:19.890 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:19.890 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:19.890 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:19.890 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:19.890 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:19.890 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:19.890 11:26:25 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:12:19.890 11:26:25 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:12:20.149 ===================================================== 00:12:20.149 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:20.149 ===================================================== 00:12:20.149 Controller Capabilities/Features 00:12:20.149 ================================ 00:12:20.149 Vendor ID: 1b36 00:12:20.149 Subsystem Vendor ID: 1af4 00:12:20.149 Serial Number: 12340 00:12:20.149 Model Number: QEMU NVMe Ctrl 00:12:20.149 Firmware Version: 8.0.0 00:12:20.149 Recommended Arb Burst: 6 00:12:20.149 IEEE OUI Identifier: 00 54 52 00:12:20.149 Multi-path I/O 00:12:20.149 May have multiple subsystem ports: No 00:12:20.149 May have multiple controllers: No 00:12:20.149 Associated with SR-IOV VF: No 00:12:20.149 Max Data Transfer Size: 524288 00:12:20.149 Max Number of Namespaces: 256 00:12:20.149 Max Number of I/O Queues: 64 00:12:20.149 NVMe Specification Version (VS): 1.4 00:12:20.149 NVMe Specification Version (Identify): 1.4 00:12:20.149 Maximum Queue Entries: 2048 00:12:20.149 Contiguous Queues Required: Yes 00:12:20.149 Arbitration Mechanisms Supported 00:12:20.149 Weighted Round Robin: Not Supported 00:12:20.149 Vendor Specific: Not Supported 00:12:20.149 Reset Timeout: 7500 ms 00:12:20.149 Doorbell Stride: 4 bytes 00:12:20.149 NVM Subsystem Reset: Not Supported 00:12:20.149 Command Sets Supported 00:12:20.149 NVM Command Set: Supported 00:12:20.149 Boot Partition: Not Supported 00:12:20.149 Memory Page Size Minimum: 4096 bytes 00:12:20.149 Memory Page Size Maximum: 65536 bytes 00:12:20.149 Persistent Memory Region: Not Supported 00:12:20.149 Optional Asynchronous Events Supported 00:12:20.149 Namespace Attribute Notices: Supported 00:12:20.149 Firmware Activation Notices: Not Supported 00:12:20.149 ANA Change Notices: Not Supported 00:12:20.149 PLE Aggregate Log Change Notices: Not Supported 00:12:20.149 LBA Status Info Alert Notices: Not Supported 00:12:20.149 EGE Aggregate Log Change Notices: Not Supported 00:12:20.149 Normal NVM Subsystem Shutdown event: Not Supported 00:12:20.149 Zone Descriptor Change Notices: Not Supported 00:12:20.149 Discovery Log Change Notices: Not Supported 00:12:20.149 Controller Attributes 00:12:20.149 128-bit Host Identifier: Not Supported 00:12:20.149 Non-Operational Permissive Mode: Not Supported 00:12:20.149 NVM Sets: Not Supported 00:12:20.149 Read Recovery Levels: Not Supported 00:12:20.149 Endurance Groups: Not Supported 00:12:20.149 Predictable Latency Mode: Not Supported 00:12:20.149 Traffic Based Keep ALive: Not Supported 00:12:20.149 Namespace Granularity: Not Supported 00:12:20.149 SQ Associations: Not Supported 00:12:20.149 UUID List: Not Supported 00:12:20.149 Multi-Domain Subsystem: Not Supported 00:12:20.149 Fixed Capacity Management: Not Supported 00:12:20.149 Variable Capacity Management: Not Supported 00:12:20.149 Delete Endurance Group: Not Supported 00:12:20.149 Delete NVM Set: Not Supported 00:12:20.149 Extended LBA Formats Supported: Supported 00:12:20.149 Flexible Data Placement Supported: Not Supported 00:12:20.149 00:12:20.149 Controller Memory Buffer Support 00:12:20.149 ================================ 00:12:20.149 Supported: No 00:12:20.149 00:12:20.149 Persistent Memory Region Support 00:12:20.149 
================================ 00:12:20.149 Supported: No 00:12:20.149 00:12:20.149 Admin Command Set Attributes 00:12:20.149 ============================ 00:12:20.149 Security Send/Receive: Not Supported 00:12:20.149 Format NVM: Supported 00:12:20.149 Firmware Activate/Download: Not Supported 00:12:20.149 Namespace Management: Supported 00:12:20.149 Device Self-Test: Not Supported 00:12:20.149 Directives: Supported 00:12:20.149 NVMe-MI: Not Supported 00:12:20.149 Virtualization Management: Not Supported 00:12:20.149 Doorbell Buffer Config: Supported 00:12:20.149 Get LBA Status Capability: Not Supported 00:12:20.149 Command & Feature Lockdown Capability: Not Supported 00:12:20.149 Abort Command Limit: 4 00:12:20.149 Async Event Request Limit: 4 00:12:20.149 Number of Firmware Slots: N/A 00:12:20.149 Firmware Slot 1 Read-Only: N/A 00:12:20.149 Firmware Activation Without Reset: N/A 00:12:20.149 Multiple Update Detection Support: N/A 00:12:20.149 Firmware Update Granularity: No Information Provided 00:12:20.149 Per-Namespace SMART Log: Yes 00:12:20.149 Asymmetric Namespace Access Log Page: Not Supported 00:12:20.149 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:12:20.149 Command Effects Log Page: Supported 00:12:20.149 Get Log Page Extended Data: Supported 00:12:20.149 Telemetry Log Pages: Not Supported 00:12:20.149 Persistent Event Log Pages: Not Supported 00:12:20.149 Supported Log Pages Log Page: May Support 00:12:20.149 Commands Supported & Effects Log Page: Not Supported 00:12:20.149 Feature Identifiers & Effects Log Page:May Support 00:12:20.149 NVMe-MI Commands & Effects Log Page: May Support 00:12:20.149 Data Area 4 for Telemetry Log: Not Supported 00:12:20.149 Error Log Page Entries Supported: 1 00:12:20.149 Keep Alive: Not Supported 00:12:20.149 00:12:20.149 NVM Command Set Attributes 00:12:20.149 ========================== 00:12:20.149 Submission Queue Entry Size 00:12:20.149 Max: 64 00:12:20.149 Min: 64 00:12:20.149 Completion Queue Entry Size 00:12:20.149 Max: 16 00:12:20.149 Min: 16 00:12:20.149 Number of Namespaces: 256 00:12:20.149 Compare Command: Supported 00:12:20.149 Write Uncorrectable Command: Not Supported 00:12:20.149 Dataset Management Command: Supported 00:12:20.149 Write Zeroes Command: Supported 00:12:20.149 Set Features Save Field: Supported 00:12:20.149 Reservations: Not Supported 00:12:20.149 Timestamp: Supported 00:12:20.149 Copy: Supported 00:12:20.149 Volatile Write Cache: Present 00:12:20.149 Atomic Write Unit (Normal): 1 00:12:20.149 Atomic Write Unit (PFail): 1 00:12:20.149 Atomic Compare & Write Unit: 1 00:12:20.149 Fused Compare & Write: Not Supported 00:12:20.149 Scatter-Gather List 00:12:20.149 SGL Command Set: Supported 00:12:20.149 SGL Keyed: Not Supported 00:12:20.149 SGL Bit Bucket Descriptor: Not Supported 00:12:20.150 SGL Metadata Pointer: Not Supported 00:12:20.150 Oversized SGL: Not Supported 00:12:20.150 SGL Metadata Address: Not Supported 00:12:20.150 SGL Offset: Not Supported 00:12:20.150 Transport SGL Data Block: Not Supported 00:12:20.150 Replay Protected Memory Block: Not Supported 00:12:20.150 00:12:20.150 Firmware Slot Information 00:12:20.150 ========================= 00:12:20.150 Active slot: 1 00:12:20.150 Slot 1 Firmware Revision: 1.0 00:12:20.150 00:12:20.150 00:12:20.150 Commands Supported and Effects 00:12:20.150 ============================== 00:12:20.150 Admin Commands 00:12:20.150 -------------- 00:12:20.150 Delete I/O Submission Queue (00h): Supported 00:12:20.150 Create I/O Submission Queue (01h): Supported 00:12:20.150 
Get Log Page (02h): Supported 00:12:20.150 Delete I/O Completion Queue (04h): Supported 00:12:20.150 Create I/O Completion Queue (05h): Supported 00:12:20.150 Identify (06h): Supported 00:12:20.150 Abort (08h): Supported 00:12:20.150 Set Features (09h): Supported 00:12:20.150 Get Features (0Ah): Supported 00:12:20.150 Asynchronous Event Request (0Ch): Supported 00:12:20.150 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:20.150 Directive Send (19h): Supported 00:12:20.150 Directive Receive (1Ah): Supported 00:12:20.150 Virtualization Management (1Ch): Supported 00:12:20.150 Doorbell Buffer Config (7Ch): Supported 00:12:20.150 Format NVM (80h): Supported LBA-Change 00:12:20.150 I/O Commands 00:12:20.150 ------------ 00:12:20.150 Flush (00h): Supported LBA-Change 00:12:20.150 Write (01h): Supported LBA-Change 00:12:20.150 Read (02h): Supported 00:12:20.150 Compare (05h): Supported 00:12:20.150 Write Zeroes (08h): Supported LBA-Change 00:12:20.150 Dataset Management (09h): Supported LBA-Change 00:12:20.150 Unknown (0Ch): Supported 00:12:20.150 Unknown (12h): Supported 00:12:20.150 Copy (19h): Supported LBA-Change 00:12:20.150 Unknown (1Dh): Supported LBA-Change 00:12:20.150 00:12:20.150 Error Log 00:12:20.150 ========= 00:12:20.150 00:12:20.150 Arbitration 00:12:20.150 =========== 00:12:20.150 Arbitration Burst: no limit 00:12:20.150 00:12:20.150 Power Management 00:12:20.150 ================ 00:12:20.150 Number of Power States: 1 00:12:20.150 Current Power State: Power State #0 00:12:20.150 Power State #0: 00:12:20.150 Max Power: 25.00 W 00:12:20.150 Non-Operational State: Operational 00:12:20.150 Entry Latency: 16 microseconds 00:12:20.150 Exit Latency: 4 microseconds 00:12:20.150 Relative Read Throughput: 0 00:12:20.150 Relative Read Latency: 0 00:12:20.150 Relative Write Throughput: 0 00:12:20.150 Relative Write Latency: 0 00:12:20.150 Idle Power: Not Reported 00:12:20.150 Active Power: Not Reported 00:12:20.150 Non-Operational Permissive Mode: Not Supported 00:12:20.150 00:12:20.150 Health Information 00:12:20.150 ================== 00:12:20.150 Critical Warnings: 00:12:20.150 Available Spare Space: OK 00:12:20.150 Temperature: OK 00:12:20.150 Device Reliability: OK 00:12:20.150 Read Only: No 00:12:20.150 Volatile Memory Backup: OK 00:12:20.150 Current Temperature: 323 Kelvin (50 Celsius) 00:12:20.150 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:20.150 Available Spare: 0% 00:12:20.150 Available Spare Threshold: 0% 00:12:20.150 Life Percentage Used: 0% 00:12:20.150 Data Units Read: 667 00:12:20.150 Data Units Written: 595 00:12:20.150 Host Read Commands: 31987 00:12:20.150 Host Write Commands: 31773 00:12:20.150 Controller Busy Time: 0 minutes 00:12:20.150 Power Cycles: 0 00:12:20.150 Power On Hours: 0 hours 00:12:20.150 Unsafe Shutdowns: 0 00:12:20.150 Unrecoverable Media Errors: 0 00:12:20.150 Lifetime Error Log Entries: 0 00:12:20.150 Warning Temperature Time: 0 minutes 00:12:20.150 Critical Temperature Time: 0 minutes 00:12:20.150 00:12:20.150 Number of Queues 00:12:20.150 ================ 00:12:20.150 Number of I/O Submission Queues: 64 00:12:20.150 Number of I/O Completion Queues: 64 00:12:20.150 00:12:20.150 ZNS Specific Controller Data 00:12:20.150 ============================ 00:12:20.150 Zone Append Size Limit: 0 00:12:20.150 00:12:20.150 00:12:20.150 Active Namespaces 00:12:20.150 ================= 00:12:20.150 Namespace ID:1 00:12:20.150 Error Recovery Timeout: Unlimited 00:12:20.150 Command Set Identifier: NVM (00h) 00:12:20.150 Deallocate: Supported 
00:12:20.150 Deallocated/Unwritten Error: Supported 00:12:20.150 Deallocated Read Value: All 0x00 00:12:20.150 Deallocate in Write Zeroes: Not Supported 00:12:20.150 Deallocated Guard Field: 0xFFFF 00:12:20.150 Flush: Supported 00:12:20.150 Reservation: Not Supported 00:12:20.150 Metadata Transferred as: Separate Metadata Buffer 00:12:20.150 Namespace Sharing Capabilities: Private 00:12:20.150 Size (in LBAs): 1548666 (5GiB) 00:12:20.150 Capacity (in LBAs): 1548666 (5GiB) 00:12:20.150 Utilization (in LBAs): 1548666 (5GiB) 00:12:20.150 Thin Provisioning: Not Supported 00:12:20.150 Per-NS Atomic Units: No 00:12:20.150 Maximum Single Source Range Length: 128 00:12:20.150 Maximum Copy Length: 128 00:12:20.150 Maximum Source Range Count: 128 00:12:20.150 NGUID/EUI64 Never Reused: No 00:12:20.150 Namespace Write Protected: No 00:12:20.150 Number of LBA Formats: 8 00:12:20.150 Current LBA Format: LBA Format #07 00:12:20.150 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:20.150 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:20.150 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:20.150 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:20.150 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:20.150 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:20.150 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:20.150 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:20.150 00:12:20.150 NVM Specific Namespace Data 00:12:20.150 =========================== 00:12:20.150 Logical Block Storage Tag Mask: 0 00:12:20.150 Protection Information Capabilities: 00:12:20.150 16b Guard Protection Information Storage Tag Support: No 00:12:20.150 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:20.150 Storage Tag Check Read Support: No 00:12:20.150 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:20.150 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:20.150 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:20.150 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:20.150 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:20.150 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:20.150 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:20.150 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:20.407 11:26:25 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:12:20.407 11:26:25 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:12:20.667 ===================================================== 00:12:20.667 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:20.667 ===================================================== 00:12:20.667 Controller Capabilities/Features 00:12:20.667 ================================ 00:12:20.667 Vendor ID: 1b36 00:12:20.667 Subsystem Vendor ID: 1af4 00:12:20.667 Serial Number: 12341 00:12:20.667 Model Number: QEMU NVMe Ctrl 00:12:20.667 Firmware Version: 8.0.0 00:12:20.667 Recommended Arb Burst: 6 00:12:20.667 IEEE OUI Identifier: 00 54 52 00:12:20.667 Multi-path I/O 00:12:20.667 May have multiple subsystem ports: No 00:12:20.667 May have multiple 
controllers: No 00:12:20.667 Associated with SR-IOV VF: No 00:12:20.667 Max Data Transfer Size: 524288 00:12:20.667 Max Number of Namespaces: 256 00:12:20.667 Max Number of I/O Queues: 64 00:12:20.667 NVMe Specification Version (VS): 1.4 00:12:20.667 NVMe Specification Version (Identify): 1.4 00:12:20.667 Maximum Queue Entries: 2048 00:12:20.667 Contiguous Queues Required: Yes 00:12:20.667 Arbitration Mechanisms Supported 00:12:20.667 Weighted Round Robin: Not Supported 00:12:20.667 Vendor Specific: Not Supported 00:12:20.667 Reset Timeout: 7500 ms 00:12:20.667 Doorbell Stride: 4 bytes 00:12:20.667 NVM Subsystem Reset: Not Supported 00:12:20.667 Command Sets Supported 00:12:20.667 NVM Command Set: Supported 00:12:20.667 Boot Partition: Not Supported 00:12:20.667 Memory Page Size Minimum: 4096 bytes 00:12:20.667 Memory Page Size Maximum: 65536 bytes 00:12:20.667 Persistent Memory Region: Not Supported 00:12:20.667 Optional Asynchronous Events Supported 00:12:20.667 Namespace Attribute Notices: Supported 00:12:20.667 Firmware Activation Notices: Not Supported 00:12:20.667 ANA Change Notices: Not Supported 00:12:20.667 PLE Aggregate Log Change Notices: Not Supported 00:12:20.667 LBA Status Info Alert Notices: Not Supported 00:12:20.667 EGE Aggregate Log Change Notices: Not Supported 00:12:20.667 Normal NVM Subsystem Shutdown event: Not Supported 00:12:20.667 Zone Descriptor Change Notices: Not Supported 00:12:20.667 Discovery Log Change Notices: Not Supported 00:12:20.667 Controller Attributes 00:12:20.667 128-bit Host Identifier: Not Supported 00:12:20.667 Non-Operational Permissive Mode: Not Supported 00:12:20.667 NVM Sets: Not Supported 00:12:20.667 Read Recovery Levels: Not Supported 00:12:20.667 Endurance Groups: Not Supported 00:12:20.667 Predictable Latency Mode: Not Supported 00:12:20.667 Traffic Based Keep ALive: Not Supported 00:12:20.667 Namespace Granularity: Not Supported 00:12:20.667 SQ Associations: Not Supported 00:12:20.667 UUID List: Not Supported 00:12:20.667 Multi-Domain Subsystem: Not Supported 00:12:20.667 Fixed Capacity Management: Not Supported 00:12:20.667 Variable Capacity Management: Not Supported 00:12:20.667 Delete Endurance Group: Not Supported 00:12:20.667 Delete NVM Set: Not Supported 00:12:20.667 Extended LBA Formats Supported: Supported 00:12:20.667 Flexible Data Placement Supported: Not Supported 00:12:20.667 00:12:20.667 Controller Memory Buffer Support 00:12:20.667 ================================ 00:12:20.667 Supported: No 00:12:20.667 00:12:20.667 Persistent Memory Region Support 00:12:20.667 ================================ 00:12:20.667 Supported: No 00:12:20.667 00:12:20.667 Admin Command Set Attributes 00:12:20.667 ============================ 00:12:20.667 Security Send/Receive: Not Supported 00:12:20.667 Format NVM: Supported 00:12:20.667 Firmware Activate/Download: Not Supported 00:12:20.667 Namespace Management: Supported 00:12:20.667 Device Self-Test: Not Supported 00:12:20.667 Directives: Supported 00:12:20.667 NVMe-MI: Not Supported 00:12:20.667 Virtualization Management: Not Supported 00:12:20.667 Doorbell Buffer Config: Supported 00:12:20.667 Get LBA Status Capability: Not Supported 00:12:20.667 Command & Feature Lockdown Capability: Not Supported 00:12:20.667 Abort Command Limit: 4 00:12:20.667 Async Event Request Limit: 4 00:12:20.667 Number of Firmware Slots: N/A 00:12:20.667 Firmware Slot 1 Read-Only: N/A 00:12:20.667 Firmware Activation Without Reset: N/A 00:12:20.667 Multiple Update Detection Support: N/A 00:12:20.667 Firmware Update 
Granularity: No Information Provided 00:12:20.667 Per-Namespace SMART Log: Yes 00:12:20.667 Asymmetric Namespace Access Log Page: Not Supported 00:12:20.667 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:12:20.667 Command Effects Log Page: Supported 00:12:20.667 Get Log Page Extended Data: Supported 00:12:20.667 Telemetry Log Pages: Not Supported 00:12:20.667 Persistent Event Log Pages: Not Supported 00:12:20.667 Supported Log Pages Log Page: May Support 00:12:20.667 Commands Supported & Effects Log Page: Not Supported 00:12:20.667 Feature Identifiers & Effects Log Page:May Support 00:12:20.667 NVMe-MI Commands & Effects Log Page: May Support 00:12:20.667 Data Area 4 for Telemetry Log: Not Supported 00:12:20.667 Error Log Page Entries Supported: 1 00:12:20.667 Keep Alive: Not Supported 00:12:20.667 00:12:20.667 NVM Command Set Attributes 00:12:20.667 ========================== 00:12:20.667 Submission Queue Entry Size 00:12:20.667 Max: 64 00:12:20.667 Min: 64 00:12:20.667 Completion Queue Entry Size 00:12:20.667 Max: 16 00:12:20.667 Min: 16 00:12:20.667 Number of Namespaces: 256 00:12:20.667 Compare Command: Supported 00:12:20.667 Write Uncorrectable Command: Not Supported 00:12:20.667 Dataset Management Command: Supported 00:12:20.667 Write Zeroes Command: Supported 00:12:20.667 Set Features Save Field: Supported 00:12:20.667 Reservations: Not Supported 00:12:20.667 Timestamp: Supported 00:12:20.667 Copy: Supported 00:12:20.667 Volatile Write Cache: Present 00:12:20.667 Atomic Write Unit (Normal): 1 00:12:20.667 Atomic Write Unit (PFail): 1 00:12:20.667 Atomic Compare & Write Unit: 1 00:12:20.667 Fused Compare & Write: Not Supported 00:12:20.667 Scatter-Gather List 00:12:20.667 SGL Command Set: Supported 00:12:20.667 SGL Keyed: Not Supported 00:12:20.667 SGL Bit Bucket Descriptor: Not Supported 00:12:20.667 SGL Metadata Pointer: Not Supported 00:12:20.667 Oversized SGL: Not Supported 00:12:20.667 SGL Metadata Address: Not Supported 00:12:20.667 SGL Offset: Not Supported 00:12:20.667 Transport SGL Data Block: Not Supported 00:12:20.667 Replay Protected Memory Block: Not Supported 00:12:20.667 00:12:20.667 Firmware Slot Information 00:12:20.667 ========================= 00:12:20.667 Active slot: 1 00:12:20.667 Slot 1 Firmware Revision: 1.0 00:12:20.667 00:12:20.667 00:12:20.667 Commands Supported and Effects 00:12:20.667 ============================== 00:12:20.667 Admin Commands 00:12:20.667 -------------- 00:12:20.667 Delete I/O Submission Queue (00h): Supported 00:12:20.667 Create I/O Submission Queue (01h): Supported 00:12:20.667 Get Log Page (02h): Supported 00:12:20.667 Delete I/O Completion Queue (04h): Supported 00:12:20.667 Create I/O Completion Queue (05h): Supported 00:12:20.667 Identify (06h): Supported 00:12:20.667 Abort (08h): Supported 00:12:20.667 Set Features (09h): Supported 00:12:20.667 Get Features (0Ah): Supported 00:12:20.667 Asynchronous Event Request (0Ch): Supported 00:12:20.667 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:20.667 Directive Send (19h): Supported 00:12:20.667 Directive Receive (1Ah): Supported 00:12:20.667 Virtualization Management (1Ch): Supported 00:12:20.667 Doorbell Buffer Config (7Ch): Supported 00:12:20.667 Format NVM (80h): Supported LBA-Change 00:12:20.667 I/O Commands 00:12:20.667 ------------ 00:12:20.667 Flush (00h): Supported LBA-Change 00:12:20.667 Write (01h): Supported LBA-Change 00:12:20.667 Read (02h): Supported 00:12:20.667 Compare (05h): Supported 00:12:20.667 Write Zeroes (08h): Supported LBA-Change 00:12:20.667 
Dataset Management (09h): Supported LBA-Change 00:12:20.667 Unknown (0Ch): Supported 00:12:20.667 Unknown (12h): Supported 00:12:20.667 Copy (19h): Supported LBA-Change 00:12:20.667 Unknown (1Dh): Supported LBA-Change 00:12:20.667 00:12:20.667 Error Log 00:12:20.667 ========= 00:12:20.668 00:12:20.668 Arbitration 00:12:20.668 =========== 00:12:20.668 Arbitration Burst: no limit 00:12:20.668 00:12:20.668 Power Management 00:12:20.668 ================ 00:12:20.668 Number of Power States: 1 00:12:20.668 Current Power State: Power State #0 00:12:20.668 Power State #0: 00:12:20.668 Max Power: 25.00 W 00:12:20.668 Non-Operational State: Operational 00:12:20.668 Entry Latency: 16 microseconds 00:12:20.668 Exit Latency: 4 microseconds 00:12:20.668 Relative Read Throughput: 0 00:12:20.668 Relative Read Latency: 0 00:12:20.668 Relative Write Throughput: 0 00:12:20.668 Relative Write Latency: 0 00:12:20.668 Idle Power: Not Reported 00:12:20.668 Active Power: Not Reported 00:12:20.668 Non-Operational Permissive Mode: Not Supported 00:12:20.668 00:12:20.668 Health Information 00:12:20.668 ================== 00:12:20.668 Critical Warnings: 00:12:20.668 Available Spare Space: OK 00:12:20.668 Temperature: OK 00:12:20.668 Device Reliability: OK 00:12:20.668 Read Only: No 00:12:20.668 Volatile Memory Backup: OK 00:12:20.668 Current Temperature: 323 Kelvin (50 Celsius) 00:12:20.668 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:20.668 Available Spare: 0% 00:12:20.668 Available Spare Threshold: 0% 00:12:20.668 Life Percentage Used: 0% 00:12:20.668 Data Units Read: 1003 00:12:20.668 Data Units Written: 870 00:12:20.668 Host Read Commands: 47730 00:12:20.668 Host Write Commands: 46524 00:12:20.668 Controller Busy Time: 0 minutes 00:12:20.668 Power Cycles: 0 00:12:20.668 Power On Hours: 0 hours 00:12:20.668 Unsafe Shutdowns: 0 00:12:20.668 Unrecoverable Media Errors: 0 00:12:20.668 Lifetime Error Log Entries: 0 00:12:20.668 Warning Temperature Time: 0 minutes 00:12:20.668 Critical Temperature Time: 0 minutes 00:12:20.668 00:12:20.668 Number of Queues 00:12:20.668 ================ 00:12:20.668 Number of I/O Submission Queues: 64 00:12:20.668 Number of I/O Completion Queues: 64 00:12:20.668 00:12:20.668 ZNS Specific Controller Data 00:12:20.668 ============================ 00:12:20.668 Zone Append Size Limit: 0 00:12:20.668 00:12:20.668 00:12:20.668 Active Namespaces 00:12:20.668 ================= 00:12:20.668 Namespace ID:1 00:12:20.668 Error Recovery Timeout: Unlimited 00:12:20.668 Command Set Identifier: NVM (00h) 00:12:20.668 Deallocate: Supported 00:12:20.668 Deallocated/Unwritten Error: Supported 00:12:20.668 Deallocated Read Value: All 0x00 00:12:20.668 Deallocate in Write Zeroes: Not Supported 00:12:20.668 Deallocated Guard Field: 0xFFFF 00:12:20.668 Flush: Supported 00:12:20.668 Reservation: Not Supported 00:12:20.668 Namespace Sharing Capabilities: Private 00:12:20.668 Size (in LBAs): 1310720 (5GiB) 00:12:20.668 Capacity (in LBAs): 1310720 (5GiB) 00:12:20.668 Utilization (in LBAs): 1310720 (5GiB) 00:12:20.668 Thin Provisioning: Not Supported 00:12:20.668 Per-NS Atomic Units: No 00:12:20.668 Maximum Single Source Range Length: 128 00:12:20.668 Maximum Copy Length: 128 00:12:20.668 Maximum Source Range Count: 128 00:12:20.668 NGUID/EUI64 Never Reused: No 00:12:20.668 Namespace Write Protected: No 00:12:20.668 Number of LBA Formats: 8 00:12:20.668 Current LBA Format: LBA Format #04 00:12:20.668 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:20.668 LBA Format #01: Data Size: 512 Metadata Size: 8 
00:12:20.668 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:20.668 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:20.668 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:20.668 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:20.668 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:20.668 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:20.668 00:12:20.668 NVM Specific Namespace Data 00:12:20.668 =========================== 00:12:20.668 Logical Block Storage Tag Mask: 0 00:12:20.668 Protection Information Capabilities: 00:12:20.668 16b Guard Protection Information Storage Tag Support: No 00:12:20.668 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:20.668 Storage Tag Check Read Support: No 00:12:20.668 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:20.668 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:20.668 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:20.668 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:20.668 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:20.668 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:20.668 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:20.668 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:20.668 11:26:26 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:12:20.668 11:26:26 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:12:20.927 ===================================================== 00:12:20.927 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:20.927 ===================================================== 00:12:20.927 Controller Capabilities/Features 00:12:20.927 ================================ 00:12:20.927 Vendor ID: 1b36 00:12:20.927 Subsystem Vendor ID: 1af4 00:12:20.927 Serial Number: 12342 00:12:20.927 Model Number: QEMU NVMe Ctrl 00:12:20.927 Firmware Version: 8.0.0 00:12:20.927 Recommended Arb Burst: 6 00:12:20.927 IEEE OUI Identifier: 00 54 52 00:12:20.927 Multi-path I/O 00:12:20.927 May have multiple subsystem ports: No 00:12:20.927 May have multiple controllers: No 00:12:20.927 Associated with SR-IOV VF: No 00:12:20.927 Max Data Transfer Size: 524288 00:12:20.927 Max Number of Namespaces: 256 00:12:20.927 Max Number of I/O Queues: 64 00:12:20.927 NVMe Specification Version (VS): 1.4 00:12:20.927 NVMe Specification Version (Identify): 1.4 00:12:20.927 Maximum Queue Entries: 2048 00:12:20.927 Contiguous Queues Required: Yes 00:12:20.927 Arbitration Mechanisms Supported 00:12:20.927 Weighted Round Robin: Not Supported 00:12:20.927 Vendor Specific: Not Supported 00:12:20.927 Reset Timeout: 7500 ms 00:12:20.927 Doorbell Stride: 4 bytes 00:12:20.927 NVM Subsystem Reset: Not Supported 00:12:20.927 Command Sets Supported 00:12:20.927 NVM Command Set: Supported 00:12:20.927 Boot Partition: Not Supported 00:12:20.927 Memory Page Size Minimum: 4096 bytes 00:12:20.927 Memory Page Size Maximum: 65536 bytes 00:12:20.927 Persistent Memory Region: Not Supported 00:12:20.927 Optional Asynchronous Events Supported 00:12:20.927 Namespace Attribute Notices: Supported 00:12:20.927 Firmware 
Activation Notices: Not Supported 00:12:20.927 ANA Change Notices: Not Supported 00:12:20.927 PLE Aggregate Log Change Notices: Not Supported 00:12:20.927 LBA Status Info Alert Notices: Not Supported 00:12:20.927 EGE Aggregate Log Change Notices: Not Supported 00:12:20.927 Normal NVM Subsystem Shutdown event: Not Supported 00:12:20.927 Zone Descriptor Change Notices: Not Supported 00:12:20.927 Discovery Log Change Notices: Not Supported 00:12:20.927 Controller Attributes 00:12:20.927 128-bit Host Identifier: Not Supported 00:12:20.927 Non-Operational Permissive Mode: Not Supported 00:12:20.927 NVM Sets: Not Supported 00:12:20.927 Read Recovery Levels: Not Supported 00:12:20.927 Endurance Groups: Not Supported 00:12:20.927 Predictable Latency Mode: Not Supported 00:12:20.927 Traffic Based Keep ALive: Not Supported 00:12:20.927 Namespace Granularity: Not Supported 00:12:20.927 SQ Associations: Not Supported 00:12:20.927 UUID List: Not Supported 00:12:20.927 Multi-Domain Subsystem: Not Supported 00:12:20.927 Fixed Capacity Management: Not Supported 00:12:20.927 Variable Capacity Management: Not Supported 00:12:20.927 Delete Endurance Group: Not Supported 00:12:20.927 Delete NVM Set: Not Supported 00:12:20.927 Extended LBA Formats Supported: Supported 00:12:20.927 Flexible Data Placement Supported: Not Supported 00:12:20.927 00:12:20.927 Controller Memory Buffer Support 00:12:20.927 ================================ 00:12:20.927 Supported: No 00:12:20.927 00:12:20.927 Persistent Memory Region Support 00:12:20.927 ================================ 00:12:20.927 Supported: No 00:12:20.927 00:12:20.927 Admin Command Set Attributes 00:12:20.927 ============================ 00:12:20.927 Security Send/Receive: Not Supported 00:12:20.927 Format NVM: Supported 00:12:20.927 Firmware Activate/Download: Not Supported 00:12:20.927 Namespace Management: Supported 00:12:20.927 Device Self-Test: Not Supported 00:12:20.927 Directives: Supported 00:12:20.927 NVMe-MI: Not Supported 00:12:20.927 Virtualization Management: Not Supported 00:12:20.927 Doorbell Buffer Config: Supported 00:12:20.927 Get LBA Status Capability: Not Supported 00:12:20.927 Command & Feature Lockdown Capability: Not Supported 00:12:20.927 Abort Command Limit: 4 00:12:20.927 Async Event Request Limit: 4 00:12:20.927 Number of Firmware Slots: N/A 00:12:20.927 Firmware Slot 1 Read-Only: N/A 00:12:20.927 Firmware Activation Without Reset: N/A 00:12:20.927 Multiple Update Detection Support: N/A 00:12:20.927 Firmware Update Granularity: No Information Provided 00:12:20.927 Per-Namespace SMART Log: Yes 00:12:20.927 Asymmetric Namespace Access Log Page: Not Supported 00:12:20.927 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:12:20.927 Command Effects Log Page: Supported 00:12:20.927 Get Log Page Extended Data: Supported 00:12:20.927 Telemetry Log Pages: Not Supported 00:12:20.927 Persistent Event Log Pages: Not Supported 00:12:20.927 Supported Log Pages Log Page: May Support 00:12:20.927 Commands Supported & Effects Log Page: Not Supported 00:12:20.927 Feature Identifiers & Effects Log Page:May Support 00:12:20.927 NVMe-MI Commands & Effects Log Page: May Support 00:12:20.927 Data Area 4 for Telemetry Log: Not Supported 00:12:20.927 Error Log Page Entries Supported: 1 00:12:20.927 Keep Alive: Not Supported 00:12:20.927 00:12:20.927 NVM Command Set Attributes 00:12:20.927 ========================== 00:12:20.927 Submission Queue Entry Size 00:12:20.927 Max: 64 00:12:20.927 Min: 64 00:12:20.927 Completion Queue Entry Size 00:12:20.927 Max: 16 
00:12:20.927 Min: 16 00:12:20.927 Number of Namespaces: 256 00:12:20.927 Compare Command: Supported 00:12:20.927 Write Uncorrectable Command: Not Supported 00:12:20.927 Dataset Management Command: Supported 00:12:20.927 Write Zeroes Command: Supported 00:12:20.927 Set Features Save Field: Supported 00:12:20.927 Reservations: Not Supported 00:12:20.927 Timestamp: Supported 00:12:20.927 Copy: Supported 00:12:20.927 Volatile Write Cache: Present 00:12:20.927 Atomic Write Unit (Normal): 1 00:12:20.927 Atomic Write Unit (PFail): 1 00:12:20.927 Atomic Compare & Write Unit: 1 00:12:20.927 Fused Compare & Write: Not Supported 00:12:20.927 Scatter-Gather List 00:12:20.927 SGL Command Set: Supported 00:12:20.927 SGL Keyed: Not Supported 00:12:20.927 SGL Bit Bucket Descriptor: Not Supported 00:12:20.927 SGL Metadata Pointer: Not Supported 00:12:20.927 Oversized SGL: Not Supported 00:12:20.927 SGL Metadata Address: Not Supported 00:12:20.927 SGL Offset: Not Supported 00:12:20.927 Transport SGL Data Block: Not Supported 00:12:20.927 Replay Protected Memory Block: Not Supported 00:12:20.927 00:12:20.927 Firmware Slot Information 00:12:20.927 ========================= 00:12:20.927 Active slot: 1 00:12:20.927 Slot 1 Firmware Revision: 1.0 00:12:20.927 00:12:20.927 00:12:20.927 Commands Supported and Effects 00:12:20.927 ============================== 00:12:20.927 Admin Commands 00:12:20.927 -------------- 00:12:20.927 Delete I/O Submission Queue (00h): Supported 00:12:20.927 Create I/O Submission Queue (01h): Supported 00:12:20.928 Get Log Page (02h): Supported 00:12:20.928 Delete I/O Completion Queue (04h): Supported 00:12:20.928 Create I/O Completion Queue (05h): Supported 00:12:20.928 Identify (06h): Supported 00:12:20.928 Abort (08h): Supported 00:12:20.928 Set Features (09h): Supported 00:12:20.928 Get Features (0Ah): Supported 00:12:20.928 Asynchronous Event Request (0Ch): Supported 00:12:20.928 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:20.928 Directive Send (19h): Supported 00:12:20.928 Directive Receive (1Ah): Supported 00:12:20.928 Virtualization Management (1Ch): Supported 00:12:20.928 Doorbell Buffer Config (7Ch): Supported 00:12:20.928 Format NVM (80h): Supported LBA-Change 00:12:20.928 I/O Commands 00:12:20.928 ------------ 00:12:20.928 Flush (00h): Supported LBA-Change 00:12:20.928 Write (01h): Supported LBA-Change 00:12:20.928 Read (02h): Supported 00:12:20.928 Compare (05h): Supported 00:12:20.928 Write Zeroes (08h): Supported LBA-Change 00:12:20.928 Dataset Management (09h): Supported LBA-Change 00:12:20.928 Unknown (0Ch): Supported 00:12:20.928 Unknown (12h): Supported 00:12:20.928 Copy (19h): Supported LBA-Change 00:12:20.928 Unknown (1Dh): Supported LBA-Change 00:12:20.928 00:12:20.928 Error Log 00:12:20.928 ========= 00:12:20.928 00:12:20.928 Arbitration 00:12:20.928 =========== 00:12:20.928 Arbitration Burst: no limit 00:12:20.928 00:12:20.928 Power Management 00:12:20.928 ================ 00:12:20.928 Number of Power States: 1 00:12:20.928 Current Power State: Power State #0 00:12:20.928 Power State #0: 00:12:20.928 Max Power: 25.00 W 00:12:20.928 Non-Operational State: Operational 00:12:20.928 Entry Latency: 16 microseconds 00:12:20.928 Exit Latency: 4 microseconds 00:12:20.928 Relative Read Throughput: 0 00:12:20.928 Relative Read Latency: 0 00:12:20.928 Relative Write Throughput: 0 00:12:20.928 Relative Write Latency: 0 00:12:20.928 Idle Power: Not Reported 00:12:20.928 Active Power: Not Reported 00:12:20.928 Non-Operational Permissive Mode: Not Supported 
00:12:20.928 00:12:20.928 Health Information 00:12:20.928 ================== 00:12:20.928 Critical Warnings: 00:12:20.928 Available Spare Space: OK 00:12:20.928 Temperature: OK 00:12:20.928 Device Reliability: OK 00:12:20.928 Read Only: No 00:12:20.928 Volatile Memory Backup: OK 00:12:20.928 Current Temperature: 323 Kelvin (50 Celsius) 00:12:20.928 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:20.928 Available Spare: 0% 00:12:20.928 Available Spare Threshold: 0% 00:12:20.928 Life Percentage Used: 0% 00:12:20.928 Data Units Read: 2124 00:12:20.928 Data Units Written: 1911 00:12:20.928 Host Read Commands: 97415 00:12:20.928 Host Write Commands: 95684 00:12:20.928 Controller Busy Time: 0 minutes 00:12:20.928 Power Cycles: 0 00:12:20.928 Power On Hours: 0 hours 00:12:20.928 Unsafe Shutdowns: 0 00:12:20.928 Unrecoverable Media Errors: 0 00:12:20.928 Lifetime Error Log Entries: 0 00:12:20.928 Warning Temperature Time: 0 minutes 00:12:20.928 Critical Temperature Time: 0 minutes 00:12:20.928 00:12:20.928 Number of Queues 00:12:20.928 ================ 00:12:20.928 Number of I/O Submission Queues: 64 00:12:20.928 Number of I/O Completion Queues: 64 00:12:20.928 00:12:20.928 ZNS Specific Controller Data 00:12:20.928 ============================ 00:12:20.928 Zone Append Size Limit: 0 00:12:20.928 00:12:20.928 00:12:20.928 Active Namespaces 00:12:20.928 ================= 00:12:20.928 Namespace ID:1 00:12:20.928 Error Recovery Timeout: Unlimited 00:12:20.928 Command Set Identifier: NVM (00h) 00:12:20.928 Deallocate: Supported 00:12:20.928 Deallocated/Unwritten Error: Supported 00:12:20.928 Deallocated Read Value: All 0x00 00:12:20.928 Deallocate in Write Zeroes: Not Supported 00:12:20.928 Deallocated Guard Field: 0xFFFF 00:12:20.928 Flush: Supported 00:12:20.928 Reservation: Not Supported 00:12:20.928 Namespace Sharing Capabilities: Private 00:12:20.928 Size (in LBAs): 1048576 (4GiB) 00:12:20.928 Capacity (in LBAs): 1048576 (4GiB) 00:12:20.928 Utilization (in LBAs): 1048576 (4GiB) 00:12:20.928 Thin Provisioning: Not Supported 00:12:20.928 Per-NS Atomic Units: No 00:12:20.928 Maximum Single Source Range Length: 128 00:12:20.928 Maximum Copy Length: 128 00:12:20.928 Maximum Source Range Count: 128 00:12:20.928 NGUID/EUI64 Never Reused: No 00:12:20.928 Namespace Write Protected: No 00:12:20.928 Number of LBA Formats: 8 00:12:20.928 Current LBA Format: LBA Format #04 00:12:20.928 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:20.928 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:20.928 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:20.928 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:20.928 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:20.928 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:20.928 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:20.928 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:20.928 00:12:20.928 NVM Specific Namespace Data 00:12:20.928 =========================== 00:12:20.928 Logical Block Storage Tag Mask: 0 00:12:20.928 Protection Information Capabilities: 00:12:20.928 16b Guard Protection Information Storage Tag Support: No 00:12:20.928 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:20.928 Storage Tag Check Read Support: No 00:12:20.928 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:20.928 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:20.928 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:12:20.928 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:20.928 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:20.928 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:20.928 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:20.928 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:20.928 Namespace ID:2 00:12:20.928 Error Recovery Timeout: Unlimited 00:12:20.928 Command Set Identifier: NVM (00h) 00:12:20.928 Deallocate: Supported 00:12:20.928 Deallocated/Unwritten Error: Supported 00:12:20.928 Deallocated Read Value: All 0x00 00:12:20.928 Deallocate in Write Zeroes: Not Supported 00:12:20.928 Deallocated Guard Field: 0xFFFF 00:12:20.928 Flush: Supported 00:12:20.928 Reservation: Not Supported 00:12:20.928 Namespace Sharing Capabilities: Private 00:12:20.928 Size (in LBAs): 1048576 (4GiB) 00:12:20.928 Capacity (in LBAs): 1048576 (4GiB) 00:12:20.928 Utilization (in LBAs): 1048576 (4GiB) 00:12:20.928 Thin Provisioning: Not Supported 00:12:20.928 Per-NS Atomic Units: No 00:12:20.928 Maximum Single Source Range Length: 128 00:12:20.928 Maximum Copy Length: 128 00:12:20.928 Maximum Source Range Count: 128 00:12:20.928 NGUID/EUI64 Never Reused: No 00:12:20.928 Namespace Write Protected: No 00:12:20.928 Number of LBA Formats: 8 00:12:20.928 Current LBA Format: LBA Format #04 00:12:20.928 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:20.928 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:20.928 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:20.928 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:20.928 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:20.928 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:20.928 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:20.928 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:20.928 00:12:20.928 NVM Specific Namespace Data 00:12:20.928 =========================== 00:12:20.928 Logical Block Storage Tag Mask: 0 00:12:20.928 Protection Information Capabilities: 00:12:20.928 16b Guard Protection Information Storage Tag Support: No 00:12:20.928 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:20.928 Storage Tag Check Read Support: No 00:12:20.928 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:20.928 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:20.928 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:20.928 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:20.928 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:20.928 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:20.928 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:20.928 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:20.928 Namespace ID:3 00:12:20.928 Error Recovery Timeout: Unlimited 00:12:20.929 Command Set Identifier: NVM (00h) 00:12:20.929 Deallocate: Supported 00:12:20.929 Deallocated/Unwritten Error: Supported 00:12:20.929 Deallocated Read 
Value: All 0x00 00:12:20.929 Deallocate in Write Zeroes: Not Supported 00:12:20.929 Deallocated Guard Field: 0xFFFF 00:12:20.929 Flush: Supported 00:12:20.929 Reservation: Not Supported 00:12:20.929 Namespace Sharing Capabilities: Private 00:12:20.929 Size (in LBAs): 1048576 (4GiB) 00:12:20.929 Capacity (in LBAs): 1048576 (4GiB) 00:12:20.929 Utilization (in LBAs): 1048576 (4GiB) 00:12:20.929 Thin Provisioning: Not Supported 00:12:20.929 Per-NS Atomic Units: No 00:12:20.929 Maximum Single Source Range Length: 128 00:12:20.929 Maximum Copy Length: 128 00:12:20.929 Maximum Source Range Count: 128 00:12:20.929 NGUID/EUI64 Never Reused: No 00:12:20.929 Namespace Write Protected: No 00:12:20.929 Number of LBA Formats: 8 00:12:20.929 Current LBA Format: LBA Format #04 00:12:20.929 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:20.929 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:20.929 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:20.929 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:20.929 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:20.929 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:20.929 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:20.929 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:20.929 00:12:20.929 NVM Specific Namespace Data 00:12:20.929 =========================== 00:12:20.929 Logical Block Storage Tag Mask: 0 00:12:20.929 Protection Information Capabilities: 00:12:20.929 16b Guard Protection Information Storage Tag Support: No 00:12:20.929 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:21.188 Storage Tag Check Read Support: No 00:12:21.188 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:21.188 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:21.188 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:21.188 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:21.188 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:21.188 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:21.188 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:21.188 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:21.188 11:26:26 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:12:21.188 11:26:26 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:12:21.448 ===================================================== 00:12:21.448 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:21.448 ===================================================== 00:12:21.448 Controller Capabilities/Features 00:12:21.448 ================================ 00:12:21.448 Vendor ID: 1b36 00:12:21.448 Subsystem Vendor ID: 1af4 00:12:21.448 Serial Number: 12343 00:12:21.448 Model Number: QEMU NVMe Ctrl 00:12:21.448 Firmware Version: 8.0.0 00:12:21.448 Recommended Arb Burst: 6 00:12:21.448 IEEE OUI Identifier: 00 54 52 00:12:21.448 Multi-path I/O 00:12:21.448 May have multiple subsystem ports: No 00:12:21.448 May have multiple controllers: Yes 00:12:21.448 Associated with SR-IOV VF: No 00:12:21.448 Max Data Transfer Size: 524288 00:12:21.448 Max Number of Namespaces: 
256 00:12:21.448 Max Number of I/O Queues: 64 00:12:21.448 NVMe Specification Version (VS): 1.4 00:12:21.448 NVMe Specification Version (Identify): 1.4 00:12:21.448 Maximum Queue Entries: 2048 00:12:21.448 Contiguous Queues Required: Yes 00:12:21.448 Arbitration Mechanisms Supported 00:12:21.448 Weighted Round Robin: Not Supported 00:12:21.448 Vendor Specific: Not Supported 00:12:21.448 Reset Timeout: 7500 ms 00:12:21.448 Doorbell Stride: 4 bytes 00:12:21.448 NVM Subsystem Reset: Not Supported 00:12:21.448 Command Sets Supported 00:12:21.448 NVM Command Set: Supported 00:12:21.448 Boot Partition: Not Supported 00:12:21.448 Memory Page Size Minimum: 4096 bytes 00:12:21.448 Memory Page Size Maximum: 65536 bytes 00:12:21.448 Persistent Memory Region: Not Supported 00:12:21.448 Optional Asynchronous Events Supported 00:12:21.448 Namespace Attribute Notices: Supported 00:12:21.448 Firmware Activation Notices: Not Supported 00:12:21.448 ANA Change Notices: Not Supported 00:12:21.448 PLE Aggregate Log Change Notices: Not Supported 00:12:21.448 LBA Status Info Alert Notices: Not Supported 00:12:21.448 EGE Aggregate Log Change Notices: Not Supported 00:12:21.448 Normal NVM Subsystem Shutdown event: Not Supported 00:12:21.448 Zone Descriptor Change Notices: Not Supported 00:12:21.448 Discovery Log Change Notices: Not Supported 00:12:21.448 Controller Attributes 00:12:21.448 128-bit Host Identifier: Not Supported 00:12:21.448 Non-Operational Permissive Mode: Not Supported 00:12:21.449 NVM Sets: Not Supported 00:12:21.449 Read Recovery Levels: Not Supported 00:12:21.449 Endurance Groups: Supported 00:12:21.449 Predictable Latency Mode: Not Supported 00:12:21.449 Traffic Based Keep Alive: Not Supported 00:12:21.449 Namespace Granularity: Not Supported 00:12:21.449 SQ Associations: Not Supported 00:12:21.449 UUID List: Not Supported 00:12:21.449 Multi-Domain Subsystem: Not Supported 00:12:21.449 Fixed Capacity Management: Not Supported 00:12:21.449 Variable Capacity Management: Not Supported 00:12:21.449 Delete Endurance Group: Not Supported 00:12:21.449 Delete NVM Set: Not Supported 00:12:21.449 Extended LBA Formats Supported: Supported 00:12:21.449 Flexible Data Placement Supported: Supported 00:12:21.449 00:12:21.449 Controller Memory Buffer Support 00:12:21.449 ================================ 00:12:21.449 Supported: No 00:12:21.449 00:12:21.449 Persistent Memory Region Support 00:12:21.449 ================================ 00:12:21.449 Supported: No 00:12:21.449 00:12:21.449 Admin Command Set Attributes 00:12:21.449 ============================ 00:12:21.449 Security Send/Receive: Not Supported 00:12:21.449 Format NVM: Supported 00:12:21.449 Firmware Activate/Download: Not Supported 00:12:21.449 Namespace Management: Supported 00:12:21.449 Device Self-Test: Not Supported 00:12:21.449 Directives: Supported 00:12:21.449 NVMe-MI: Not Supported 00:12:21.449 Virtualization Management: Not Supported 00:12:21.449 Doorbell Buffer Config: Supported 00:12:21.449 Get LBA Status Capability: Not Supported 00:12:21.449 Command & Feature Lockdown Capability: Not Supported 00:12:21.449 Abort Command Limit: 4 00:12:21.449 Async Event Request Limit: 4 00:12:21.449 Number of Firmware Slots: N/A 00:12:21.449 Firmware Slot 1 Read-Only: N/A 00:12:21.449 Firmware Activation Without Reset: N/A 00:12:21.449 Multiple Update Detection Support: N/A 00:12:21.449 Firmware Update Granularity: No Information Provided 00:12:21.449 Per-Namespace SMART Log: Yes 00:12:21.449 Asymmetric Namespace Access Log Page: Not Supported
00:12:21.449 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:12:21.449 Command Effects Log Page: Supported 00:12:21.449 Get Log Page Extended Data: Supported 00:12:21.449 Telemetry Log Pages: Not Supported 00:12:21.449 Persistent Event Log Pages: Not Supported 00:12:21.449 Supported Log Pages Log Page: May Support 00:12:21.449 Commands Supported & Effects Log Page: Not Supported 00:12:21.449 Feature Identifiers & Effects Log Page: May Support 00:12:21.449 NVMe-MI Commands & Effects Log Page: May Support 00:12:21.449 Data Area 4 for Telemetry Log: Not Supported 00:12:21.449 Error Log Page Entries Supported: 1 00:12:21.449 Keep Alive: Not Supported 00:12:21.449 00:12:21.449 NVM Command Set Attributes 00:12:21.449 ========================== 00:12:21.449 Submission Queue Entry Size 00:12:21.449 Max: 64 00:12:21.449 Min: 64 00:12:21.449 Completion Queue Entry Size 00:12:21.449 Max: 16 00:12:21.449 Min: 16 00:12:21.449 Number of Namespaces: 256 00:12:21.449 Compare Command: Supported 00:12:21.449 Write Uncorrectable Command: Not Supported 00:12:21.449 Dataset Management Command: Supported 00:12:21.449 Write Zeroes Command: Supported 00:12:21.449 Set Features Save Field: Supported 00:12:21.449 Reservations: Not Supported 00:12:21.449 Timestamp: Supported 00:12:21.449 Copy: Supported 00:12:21.449 Volatile Write Cache: Present 00:12:21.449 Atomic Write Unit (Normal): 1 00:12:21.449 Atomic Write Unit (PFail): 1 00:12:21.449 Atomic Compare & Write Unit: 1 00:12:21.449 Fused Compare & Write: Not Supported 00:12:21.449 Scatter-Gather List 00:12:21.449 SGL Command Set: Supported 00:12:21.449 SGL Keyed: Not Supported 00:12:21.449 SGL Bit Bucket Descriptor: Not Supported 00:12:21.449 SGL Metadata Pointer: Not Supported 00:12:21.449 Oversized SGL: Not Supported 00:12:21.449 SGL Metadata Address: Not Supported 00:12:21.449 SGL Offset: Not Supported 00:12:21.449 Transport SGL Data Block: Not Supported 00:12:21.449 Replay Protected Memory Block: Not Supported 00:12:21.449 00:12:21.449 Firmware Slot Information 00:12:21.449 ========================= 00:12:21.449 Active slot: 1 00:12:21.449 Slot 1 Firmware Revision: 1.0 00:12:21.449 00:12:21.449 00:12:21.449 Commands Supported and Effects 00:12:21.449 ============================== 00:12:21.449 Admin Commands 00:12:21.449 -------------- 00:12:21.449 Delete I/O Submission Queue (00h): Supported 00:12:21.449 Create I/O Submission Queue (01h): Supported 00:12:21.449 Get Log Page (02h): Supported 00:12:21.449 Delete I/O Completion Queue (04h): Supported 00:12:21.449 Create I/O Completion Queue (05h): Supported 00:12:21.449 Identify (06h): Supported 00:12:21.449 Abort (08h): Supported 00:12:21.449 Set Features (09h): Supported 00:12:21.449 Get Features (0Ah): Supported 00:12:21.449 Asynchronous Event Request (0Ch): Supported 00:12:21.449 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:21.449 Directive Send (19h): Supported 00:12:21.449 Directive Receive (1Ah): Supported 00:12:21.449 Virtualization Management (1Ch): Supported 00:12:21.449 Doorbell Buffer Config (7Ch): Supported 00:12:21.449 Format NVM (80h): Supported LBA-Change 00:12:21.449 I/O Commands 00:12:21.449 ------------ 00:12:21.449 Flush (00h): Supported LBA-Change 00:12:21.449 Write (01h): Supported LBA-Change 00:12:21.449 Read (02h): Supported 00:12:21.449 Compare (05h): Supported 00:12:21.449 Write Zeroes (08h): Supported LBA-Change 00:12:21.449 Dataset Management (09h): Supported LBA-Change 00:12:21.449 Unknown (0Ch): Supported 00:12:21.449 Unknown (12h): Supported 00:12:21.449 Copy
(19h): Supported LBA-Change 00:12:21.449 Unknown (1Dh): Supported LBA-Change 00:12:21.449 00:12:21.449 Error Log 00:12:21.449 ========= 00:12:21.449 00:12:21.449 Arbitration 00:12:21.449 =========== 00:12:21.449 Arbitration Burst: no limit 00:12:21.449 00:12:21.449 Power Management 00:12:21.449 ================ 00:12:21.449 Number of Power States: 1 00:12:21.449 Current Power State: Power State #0 00:12:21.449 Power State #0: 00:12:21.449 Max Power: 25.00 W 00:12:21.449 Non-Operational State: Operational 00:12:21.449 Entry Latency: 16 microseconds 00:12:21.449 Exit Latency: 4 microseconds 00:12:21.449 Relative Read Throughput: 0 00:12:21.449 Relative Read Latency: 0 00:12:21.449 Relative Write Throughput: 0 00:12:21.449 Relative Write Latency: 0 00:12:21.449 Idle Power: Not Reported 00:12:21.449 Active Power: Not Reported 00:12:21.449 Non-Operational Permissive Mode: Not Supported 00:12:21.449 00:12:21.449 Health Information 00:12:21.449 ================== 00:12:21.449 Critical Warnings: 00:12:21.449 Available Spare Space: OK 00:12:21.449 Temperature: OK 00:12:21.449 Device Reliability: OK 00:12:21.449 Read Only: No 00:12:21.449 Volatile Memory Backup: OK 00:12:21.449 Current Temperature: 323 Kelvin (50 Celsius) 00:12:21.449 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:21.449 Available Spare: 0% 00:12:21.449 Available Spare Threshold: 0% 00:12:21.449 Life Percentage Used: 0% 00:12:21.449 Data Units Read: 740 00:12:21.449 Data Units Written: 669 00:12:21.449 Host Read Commands: 32921 00:12:21.449 Host Write Commands: 32344 00:12:21.449 Controller Busy Time: 0 minutes 00:12:21.449 Power Cycles: 0 00:12:21.449 Power On Hours: 0 hours 00:12:21.449 Unsafe Shutdowns: 0 00:12:21.449 Unrecoverable Media Errors: 0 00:12:21.449 Lifetime Error Log Entries: 0 00:12:21.449 Warning Temperature Time: 0 minutes 00:12:21.449 Critical Temperature Time: 0 minutes 00:12:21.449 00:12:21.449 Number of Queues 00:12:21.449 ================ 00:12:21.449 Number of I/O Submission Queues: 64 00:12:21.449 Number of I/O Completion Queues: 64 00:12:21.449 00:12:21.449 ZNS Specific Controller Data 00:12:21.449 ============================ 00:12:21.449 Zone Append Size Limit: 0 00:12:21.449 00:12:21.449 00:12:21.449 Active Namespaces 00:12:21.449 ================= 00:12:21.449 Namespace ID:1 00:12:21.449 Error Recovery Timeout: Unlimited 00:12:21.449 Command Set Identifier: NVM (00h) 00:12:21.449 Deallocate: Supported 00:12:21.449 Deallocated/Unwritten Error: Supported 00:12:21.449 Deallocated Read Value: All 0x00 00:12:21.449 Deallocate in Write Zeroes: Not Supported 00:12:21.449 Deallocated Guard Field: 0xFFFF 00:12:21.449 Flush: Supported 00:12:21.449 Reservation: Not Supported 00:12:21.449 Namespace Sharing Capabilities: Multiple Controllers 00:12:21.449 Size (in LBAs): 262144 (1GiB) 00:12:21.449 Capacity (in LBAs): 262144 (1GiB) 00:12:21.449 Utilization (in LBAs): 262144 (1GiB) 00:12:21.449 Thin Provisioning: Not Supported 00:12:21.450 Per-NS Atomic Units: No 00:12:21.450 Maximum Single Source Range Length: 128 00:12:21.450 Maximum Copy Length: 128 00:12:21.450 Maximum Source Range Count: 128 00:12:21.450 NGUID/EUI64 Never Reused: No 00:12:21.450 Namespace Write Protected: No 00:12:21.450 Endurance group ID: 1 00:12:21.450 Number of LBA Formats: 8 00:12:21.450 Current LBA Format: LBA Format #04 00:12:21.450 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:21.450 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:21.450 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:21.450 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:12:21.450 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:21.450 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:21.450 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:21.450 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:21.450 00:12:21.450 Get Feature FDP: 00:12:21.450 ================ 00:12:21.450 Enabled: Yes 00:12:21.450 FDP configuration index: 0 00:12:21.450 00:12:21.450 FDP configurations log page 00:12:21.450 =========================== 00:12:21.450 Number of FDP configurations: 1 00:12:21.450 Version: 0 00:12:21.450 Size: 112 00:12:21.450 FDP Configuration Descriptor: 0 00:12:21.450 Descriptor Size: 96 00:12:21.450 Reclaim Group Identifier format: 2 00:12:21.450 FDP Volatile Write Cache: Not Present 00:12:21.450 FDP Configuration: Valid 00:12:21.450 Vendor Specific Size: 0 00:12:21.450 Number of Reclaim Groups: 2 00:12:21.450 Number of Reclaim Unit Handles: 8 00:12:21.450 Max Placement Identifiers: 128 00:12:21.450 Number of Namespaces Supported: 256 00:12:21.450 Reclaim Unit Nominal Size: 6000000 bytes 00:12:21.450 Estimated Reclaim Unit Time Limit: Not Reported 00:12:21.450 RUH Desc #000: RUH Type: Initially Isolated 00:12:21.450 RUH Desc #001: RUH Type: Initially Isolated 00:12:21.450 RUH Desc #002: RUH Type: Initially Isolated 00:12:21.450 RUH Desc #003: RUH Type: Initially Isolated 00:12:21.450 RUH Desc #004: RUH Type: Initially Isolated 00:12:21.450 RUH Desc #005: RUH Type: Initially Isolated 00:12:21.450 RUH Desc #006: RUH Type: Initially Isolated 00:12:21.450 RUH Desc #007: RUH Type: Initially Isolated 00:12:21.450 00:12:21.450 FDP reclaim unit handle usage log page 00:12:21.450 ====================================== 00:12:21.450 Number of Reclaim Unit Handles: 8 00:12:21.450 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:12:21.450 RUH Usage Desc #001: RUH Attributes: Unused 00:12:21.450 RUH Usage Desc #002: RUH Attributes: Unused 00:12:21.450 RUH Usage Desc #003: RUH Attributes: Unused 00:12:21.450 RUH Usage Desc #004: RUH Attributes: Unused 00:12:21.450 RUH Usage Desc #005: RUH Attributes: Unused 00:12:21.450 RUH Usage Desc #006: RUH Attributes: Unused 00:12:21.450 RUH Usage Desc #007: RUH Attributes: Unused 00:12:21.450 00:12:21.450 FDP statistics log page 00:12:21.450 ======================= 00:12:21.450 Host bytes with metadata written: 419274752 00:12:21.450 Media bytes with metadata written: 419319808 00:12:21.450 Media bytes erased: 0 00:12:21.450 00:12:21.450 FDP events log page 00:12:21.450 =================== 00:12:21.450 Number of FDP events: 0 00:12:21.450 00:12:21.450 NVM Specific Namespace Data 00:12:21.450 =========================== 00:12:21.450 Logical Block Storage Tag Mask: 0 00:12:21.450 Protection Information Capabilities: 00:12:21.450 16b Guard Protection Information Storage Tag Support: No 00:12:21.450 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:21.450 Storage Tag Check Read Support: No 00:12:21.450 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:21.450 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:21.450 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:21.450 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:21.450 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:21.450 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:21.450 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:21.450 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:21.450 00:12:21.450 real 0m2.069s 00:12:21.450 user 0m0.786s 00:12:21.450 sys 0m1.009s 00:12:21.450 11:26:27 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:21.450 11:26:27 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:12:21.450 ************************************ 00:12:21.450 END TEST nvme_identify 00:12:21.450 ************************************ 00:12:21.450 11:26:27 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:12:21.450 11:26:27 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:21.450 11:26:27 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:21.450 11:26:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:21.450 ************************************ 00:12:21.450 START TEST nvme_perf 00:12:21.450 ************************************ 00:12:21.450 11:26:27 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:12:21.450 11:26:27 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:12:23.353 Initializing NVMe Controllers 00:12:23.353 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:23.353 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:23.353 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:23.353 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:23.353 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:12:23.353 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:12:23.353 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:12:23.353 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:12:23.353 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:12:23.353 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:12:23.353 Initialization complete. Launching workers. 
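For reference, the two binaries exercised in this section (the spdk_nvme_identify loop above and the spdk_nvme_perf run whose results follow) can be re-run by hand when debugging a failure. The shell sketch below is an editor's illustration, not part of the test suite: the repo path, BDF, and every flag are copied from the invocations logged here, while the surrounding assumptions — binaries already built, controllers already bound to a userspace driver (e.g. via scripts/setup.sh) — are not shown in this excerpt, and the copied -LL/-i/-N flags should be checked against the tool's --help for the SPDK revision under test.

#!/usr/bin/env bash
set -euo pipefail

SPDK=/home/vagrant/spdk_repo/spdk   # build tree used by this job (from the log)
BDF=0000:00:13.0                    # one of the QEMU [1b36:0010] controllers above

# Per-controller identify dump, as the nvme_identify loop does for each BDF:
"$SPDK/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$BDF" -i 0

# The read workload nvme_perf launches below: queue depth 128, 12288-byte
# (12 KiB) reads for 1 second; remaining flags copied verbatim from the log.
# With no -r argument it attaches every available controller, which matches
# the six "Associating PCIE ... with lcore 0" lines above.
"$SPDK/build/bin/spdk_nvme_perf" -q 128 -w read -o 12288 -t 1 -LL -i 0 -N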
00:12:23.353 ======================================================== 00:12:23.353 Latency(us) 00:12:23.353 Device Information : IOPS MiB/s Average min max 00:12:23.353 PCIE (0000:00:10.0) NSID 1 from core 0: 12373.57 145.00 10372.56 8133.10 56561.71 00:12:23.353 PCIE (0000:00:11.0) NSID 1 from core 0: 12373.57 145.00 10342.68 8232.99 52581.45 00:12:23.353 PCIE (0000:00:13.0) NSID 1 from core 0: 12373.57 145.00 10313.03 8275.60 49615.99 00:12:23.353 PCIE (0000:00:12.0) NSID 1 from core 0: 12373.57 145.00 10284.45 8205.41 46189.67 00:12:23.353 PCIE (0000:00:12.0) NSID 2 from core 0: 12373.57 145.00 10257.82 8181.31 43301.69 00:12:23.353 PCIE (0000:00:12.0) NSID 3 from core 0: 12373.57 145.00 10230.37 8241.18 39737.02 00:12:23.353 ======================================================== 00:12:23.353 Total : 74241.43 870.02 10300.15 8133.10 56561.71 00:12:23.353 00:12:23.353 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:12:23.353 ================================================================================= 00:12:23.353 1.00000% : 8426.057us 00:12:23.353 10.00000% : 8862.964us 00:12:23.353 25.00000% : 9299.870us 00:12:23.353 50.00000% : 9799.192us 00:12:23.353 75.00000% : 10485.760us 00:12:23.353 90.00000% : 11421.989us 00:12:23.353 95.00000% : 12295.802us 00:12:23.353 98.00000% : 13793.768us 00:12:23.353 99.00000% : 43441.006us 00:12:23.353 99.50000% : 52678.461us 00:12:23.353 99.90000% : 55924.053us 00:12:23.353 99.99000% : 56673.036us 00:12:23.353 99.99900% : 56673.036us 00:12:23.353 99.99990% : 56673.036us 00:12:23.353 99.99999% : 56673.036us 00:12:23.353 00:12:23.353 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:12:23.353 ================================================================================= 00:12:23.353 1.00000% : 8488.472us 00:12:23.353 10.00000% : 8925.379us 00:12:23.353 25.00000% : 9299.870us 00:12:23.353 50.00000% : 9799.192us 00:12:23.353 75.00000% : 10485.760us 00:12:23.353 90.00000% : 11359.573us 00:12:23.353 95.00000% : 12170.971us 00:12:23.353 98.00000% : 13606.522us 00:12:23.353 99.00000% : 40445.074us 00:12:23.353 99.50000% : 49432.869us 00:12:23.353 99.90000% : 52179.139us 00:12:23.353 99.99000% : 52678.461us 00:12:23.353 99.99900% : 52678.461us 00:12:23.353 99.99990% : 52678.461us 00:12:23.353 99.99999% : 52678.461us 00:12:23.353 00:12:23.353 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:12:23.353 ================================================================================= 00:12:23.353 1.00000% : 8550.888us 00:12:23.353 10.00000% : 8925.379us 00:12:23.353 25.00000% : 9299.870us 00:12:23.353 50.00000% : 9799.192us 00:12:23.353 75.00000% : 10485.760us 00:12:23.353 90.00000% : 11359.573us 00:12:23.353 95.00000% : 12233.387us 00:12:23.353 98.00000% : 13544.107us 00:12:23.353 99.00000% : 37449.143us 00:12:23.353 99.50000% : 46686.598us 00:12:23.353 99.90000% : 49183.208us 00:12:23.353 99.99000% : 49682.530us 00:12:23.353 99.99900% : 49682.530us 00:12:23.353 99.99990% : 49682.530us 00:12:23.353 99.99999% : 49682.530us 00:12:23.353 00:12:23.353 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:12:23.353 ================================================================================= 00:12:23.353 1.00000% : 8550.888us 00:12:23.353 10.00000% : 8925.379us 00:12:23.353 25.00000% : 9299.870us 00:12:23.353 50.00000% : 9799.192us 00:12:23.353 75.00000% : 10485.760us 00:12:23.353 90.00000% : 11359.573us 00:12:23.353 95.00000% : 12233.387us 00:12:23.353 98.00000% : 13793.768us 
00:12:23.353 99.00000% : 34453.211us 00:12:23.353 99.50000% : 43441.006us 00:12:23.353 99.90000% : 45687.954us 00:12:23.353 99.99000% : 46187.276us 00:12:23.353 99.99900% : 46436.937us 00:12:23.353 99.99990% : 46436.937us 00:12:23.353 99.99999% : 46436.937us 00:12:23.354 00:12:23.354 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:12:23.354 ================================================================================= 00:12:23.354 1.00000% : 8488.472us 00:12:23.354 10.00000% : 8925.379us 00:12:23.354 25.00000% : 9299.870us 00:12:23.354 50.00000% : 9799.192us 00:12:23.354 75.00000% : 10485.760us 00:12:23.354 90.00000% : 11421.989us 00:12:23.354 95.00000% : 12233.387us 00:12:23.354 98.00000% : 14105.844us 00:12:23.354 99.00000% : 30957.958us 00:12:23.354 99.50000% : 40445.074us 00:12:23.354 99.90000% : 42941.684us 00:12:23.354 99.99000% : 43441.006us 00:12:23.354 99.99900% : 43441.006us 00:12:23.354 99.99990% : 43441.006us 00:12:23.354 99.99999% : 43441.006us 00:12:23.354 00:12:23.354 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:12:23.354 ================================================================================= 00:12:23.354 1.00000% : 8488.472us 00:12:23.354 10.00000% : 8925.379us 00:12:23.354 25.00000% : 9299.870us 00:12:23.354 50.00000% : 9799.192us 00:12:23.354 75.00000% : 10485.760us 00:12:23.354 90.00000% : 11421.989us 00:12:23.354 95.00000% : 12233.387us 00:12:23.354 98.00000% : 14542.750us 00:12:23.354 99.00000% : 27837.196us 00:12:23.354 99.50000% : 37199.482us 00:12:23.354 99.90000% : 39446.430us 00:12:23.354 99.99000% : 39945.752us 00:12:23.354 99.99900% : 39945.752us 00:12:23.354 99.99990% : 39945.752us 00:12:23.354 99.99999% : 39945.752us 00:12:23.354 00:12:23.354 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:12:23.354 ============================================================================== 00:12:23.354 Range in us Cumulative IO count 00:12:23.354 8113.981 - 8176.396: 0.0483% ( 6) 00:12:23.354 8176.396 - 8238.811: 0.2094% ( 20) 00:12:23.354 8238.811 - 8301.227: 0.3947% ( 23) 00:12:23.354 8301.227 - 8363.642: 0.8215% ( 53) 00:12:23.354 8363.642 - 8426.057: 1.3531% ( 66) 00:12:23.354 8426.057 - 8488.472: 2.0538% ( 87) 00:12:23.354 8488.472 - 8550.888: 3.0042% ( 118) 00:12:23.354 8550.888 - 8613.303: 4.0915% ( 135) 00:12:23.354 8613.303 - 8675.718: 5.4365% ( 167) 00:12:23.354 8675.718 - 8738.133: 6.9990% ( 194) 00:12:23.354 8738.133 - 8800.549: 8.8112% ( 225) 00:12:23.354 8800.549 - 8862.964: 10.7845% ( 245) 00:12:23.354 8862.964 - 8925.379: 12.8785% ( 260) 00:12:23.354 8925.379 - 8987.794: 14.8921% ( 250) 00:12:23.354 8987.794 - 9050.210: 17.2278% ( 290) 00:12:23.354 9050.210 - 9112.625: 19.5635% ( 290) 00:12:23.354 9112.625 - 9175.040: 22.0602% ( 310) 00:12:23.354 9175.040 - 9237.455: 24.5329% ( 307) 00:12:23.354 9237.455 - 9299.870: 27.1907% ( 330) 00:12:23.354 9299.870 - 9362.286: 29.9613% ( 344) 00:12:23.354 9362.286 - 9424.701: 32.7320% ( 344) 00:12:23.354 9424.701 - 9487.116: 35.7684% ( 377) 00:12:23.354 9487.116 - 9549.531: 38.7242% ( 367) 00:12:23.354 9549.531 - 9611.947: 41.8976% ( 394) 00:12:23.354 9611.947 - 9674.362: 45.0870% ( 396) 00:12:23.354 9674.362 - 9736.777: 48.1878% ( 385) 00:12:23.354 9736.777 - 9799.192: 51.0793% ( 359) 00:12:23.354 9799.192 - 9861.608: 53.9626% ( 358) 00:12:23.354 9861.608 - 9924.023: 56.7574% ( 347) 00:12:23.354 9924.023 - 9986.438: 59.4314% ( 332) 00:12:23.354 9986.438 - 10048.853: 61.9120% ( 308) 00:12:23.354 10048.853 - 10111.269: 64.3927% ( 308) 00:12:23.354 
10111.269 - 10173.684: 66.7365% ( 291) 00:12:23.354 10173.684 - 10236.099: 68.7903% ( 255) 00:12:23.354 10236.099 - 10298.514: 70.7233% ( 240) 00:12:23.354 10298.514 - 10360.930: 72.6401% ( 238) 00:12:23.354 10360.930 - 10423.345: 74.2993% ( 206) 00:12:23.354 10423.345 - 10485.760: 75.8537% ( 193) 00:12:23.354 10485.760 - 10548.175: 77.3518% ( 186) 00:12:23.354 10548.175 - 10610.590: 78.8660% ( 188) 00:12:23.354 10610.590 - 10673.006: 80.1788% ( 163) 00:12:23.354 10673.006 - 10735.421: 81.5641% ( 172) 00:12:23.354 10735.421 - 10797.836: 82.7722% ( 150) 00:12:23.354 10797.836 - 10860.251: 83.8434% ( 133) 00:12:23.354 10860.251 - 10922.667: 84.8421% ( 124) 00:12:23.354 10922.667 - 10985.082: 85.8489% ( 125) 00:12:23.354 10985.082 - 11047.497: 86.7268% ( 109) 00:12:23.354 11047.497 - 11109.912: 87.5242% ( 99) 00:12:23.354 11109.912 - 11172.328: 88.1363% ( 76) 00:12:23.354 11172.328 - 11234.743: 88.8289% ( 86) 00:12:23.354 11234.743 - 11297.158: 89.4894% ( 82) 00:12:23.354 11297.158 - 11359.573: 89.9726% ( 60) 00:12:23.354 11359.573 - 11421.989: 90.5284% ( 69) 00:12:23.354 11421.989 - 11484.404: 90.9955% ( 58) 00:12:23.354 11484.404 - 11546.819: 91.4948% ( 62) 00:12:23.354 11546.819 - 11609.234: 91.9137% ( 52) 00:12:23.354 11609.234 - 11671.650: 92.2600% ( 43) 00:12:23.354 11671.650 - 11734.065: 92.6224% ( 45) 00:12:23.354 11734.065 - 11796.480: 92.9688% ( 43) 00:12:23.354 11796.480 - 11858.895: 93.2748% ( 38) 00:12:23.354 11858.895 - 11921.310: 93.6453% ( 46) 00:12:23.354 11921.310 - 11983.726: 93.9755% ( 41) 00:12:23.354 11983.726 - 12046.141: 94.2091% ( 29) 00:12:23.354 12046.141 - 12108.556: 94.4990% ( 36) 00:12:23.354 12108.556 - 12170.971: 94.7407% ( 30) 00:12:23.354 12170.971 - 12233.387: 94.9662% ( 28) 00:12:23.354 12233.387 - 12295.802: 95.2239% ( 32) 00:12:23.354 12295.802 - 12358.217: 95.4655% ( 30) 00:12:23.354 12358.217 - 12420.632: 95.6910% ( 28) 00:12:23.354 12420.632 - 12483.048: 95.9166% ( 28) 00:12:23.354 12483.048 - 12545.463: 96.1018% ( 23) 00:12:23.354 12545.463 - 12607.878: 96.3354% ( 29) 00:12:23.354 12607.878 - 12670.293: 96.5045% ( 21) 00:12:23.354 12670.293 - 12732.709: 96.6012% ( 12) 00:12:23.354 12732.709 - 12795.124: 96.7461% ( 18) 00:12:23.354 12795.124 - 12857.539: 96.8992% ( 19) 00:12:23.354 12857.539 - 12919.954: 97.0039% ( 13) 00:12:23.354 12919.954 - 12982.370: 97.1005% ( 12) 00:12:23.354 12982.370 - 13044.785: 97.1972% ( 12) 00:12:23.354 13044.785 - 13107.200: 97.2938% ( 12) 00:12:23.354 13107.200 - 13169.615: 97.3905% ( 12) 00:12:23.354 13169.615 - 13232.030: 97.4710% ( 10) 00:12:23.354 13232.030 - 13294.446: 97.5515% ( 10) 00:12:23.354 13294.446 - 13356.861: 97.6401% ( 11) 00:12:23.354 13356.861 - 13419.276: 97.7046% ( 8) 00:12:23.354 13419.276 - 13481.691: 97.7368% ( 4) 00:12:23.354 13481.691 - 13544.107: 97.8012% ( 8) 00:12:23.354 13544.107 - 13606.522: 97.8415% ( 5) 00:12:23.354 13606.522 - 13668.937: 97.8898% ( 6) 00:12:23.354 13668.937 - 13731.352: 97.9462% ( 7) 00:12:23.354 13731.352 - 13793.768: 98.0026% ( 7) 00:12:23.354 13793.768 - 13856.183: 98.0590% ( 7) 00:12:23.354 13856.183 - 13918.598: 98.1073% ( 6) 00:12:23.354 13918.598 - 13981.013: 98.1717% ( 8) 00:12:23.354 13981.013 - 14043.429: 98.2039% ( 4) 00:12:23.354 14043.429 - 14105.844: 98.2361% ( 4) 00:12:23.354 14105.844 - 14168.259: 98.2603% ( 3) 00:12:23.354 14168.259 - 14230.674: 98.2845% ( 3) 00:12:23.354 14230.674 - 14293.090: 98.3006% ( 2) 00:12:23.354 14293.090 - 14355.505: 98.3167% ( 2) 00:12:23.354 14355.505 - 14417.920: 98.3328% ( 2) 00:12:23.354 14417.920 - 14480.335: 98.3489% ( 2) 
00:12:23.354 14480.335 - 14542.750: 98.3731% ( 3) 00:12:23.354 14542.750 - 14605.166: 98.3892% ( 2) 00:12:23.354 14605.166 - 14667.581: 98.4053% ( 2) 00:12:23.354 14667.581 - 14729.996: 98.4294% ( 3) 00:12:23.354 14729.996 - 14792.411: 98.4456% ( 2) 00:12:23.354 14792.411 - 14854.827: 98.4536% ( 1) 00:12:23.354 14854.827 - 14917.242: 98.4617% ( 1) 00:12:23.354 14917.242 - 14979.657: 98.4697% ( 1) 00:12:23.354 14979.657 - 15042.072: 98.4778% ( 1) 00:12:23.354 15042.072 - 15104.488: 98.4939% ( 2) 00:12:23.354 15104.488 - 15166.903: 98.5100% ( 2) 00:12:23.354 15166.903 - 15229.318: 98.5261% ( 2) 00:12:23.354 15229.318 - 15291.733: 98.5341% ( 1) 00:12:23.354 15291.733 - 15354.149: 98.5503% ( 2) 00:12:23.354 15354.149 - 15416.564: 98.5744% ( 3) 00:12:23.354 15416.564 - 15478.979: 98.5825% ( 1) 00:12:23.354 15478.979 - 15541.394: 98.5905% ( 1) 00:12:23.354 15541.394 - 15603.810: 98.6147% ( 3) 00:12:23.354 15603.810 - 15666.225: 98.6227% ( 1) 00:12:23.354 15666.225 - 15728.640: 98.6389% ( 2) 00:12:23.354 15728.640 - 15791.055: 98.6469% ( 1) 00:12:23.354 15791.055 - 15853.470: 98.6630% ( 2) 00:12:23.355 15853.470 - 15915.886: 98.6791% ( 2) 00:12:23.355 15915.886 - 15978.301: 98.6872% ( 1) 00:12:23.355 15978.301 - 16103.131: 98.7194% ( 4) 00:12:23.355 16103.131 - 16227.962: 98.7436% ( 3) 00:12:23.355 16227.962 - 16352.792: 98.7677% ( 3) 00:12:23.355 16352.792 - 16477.623: 98.7999% ( 4) 00:12:23.355 16477.623 - 16602.453: 98.8322% ( 4) 00:12:23.355 16602.453 - 16727.284: 98.8563% ( 3) 00:12:23.355 16727.284 - 16852.114: 98.8805% ( 3) 00:12:23.355 16852.114 - 16976.945: 98.9127% ( 4) 00:12:23.355 16976.945 - 17101.775: 98.9369% ( 3) 00:12:23.355 17101.775 - 17226.606: 98.9691% ( 4) 00:12:23.355 42941.684 - 43191.345: 98.9771% ( 1) 00:12:23.355 43191.345 - 43441.006: 99.0174% ( 5) 00:12:23.355 43441.006 - 43690.667: 99.0496% ( 4) 00:12:23.355 43690.667 - 43940.328: 99.0899% ( 5) 00:12:23.355 43940.328 - 44189.989: 99.1302% ( 5) 00:12:23.355 44189.989 - 44439.650: 99.1624% ( 4) 00:12:23.355 44439.650 - 44689.310: 99.2026% ( 5) 00:12:23.355 44689.310 - 44938.971: 99.2268% ( 3) 00:12:23.355 44938.971 - 45188.632: 99.2832% ( 7) 00:12:23.355 45188.632 - 45438.293: 99.3154% ( 4) 00:12:23.355 45438.293 - 45687.954: 99.3637% ( 6) 00:12:23.355 45687.954 - 45937.615: 99.3959% ( 4) 00:12:23.355 45937.615 - 46187.276: 99.4362% ( 5) 00:12:23.355 46187.276 - 46436.937: 99.4684% ( 4) 00:12:23.355 46436.937 - 46686.598: 99.4845% ( 2) 00:12:23.355 52428.800 - 52678.461: 99.5087% ( 3) 00:12:23.355 52678.461 - 52928.122: 99.5490% ( 5) 00:12:23.355 52928.122 - 53177.783: 99.5892% ( 5) 00:12:23.355 53177.783 - 53427.444: 99.6215% ( 4) 00:12:23.355 53427.444 - 53677.105: 99.6456% ( 3) 00:12:23.355 53677.105 - 53926.766: 99.6778% ( 4) 00:12:23.355 53926.766 - 54176.427: 99.7101% ( 4) 00:12:23.355 54176.427 - 54426.088: 99.7423% ( 4) 00:12:23.355 54426.088 - 54675.749: 99.7664% ( 3) 00:12:23.355 54675.749 - 54925.410: 99.7986% ( 4) 00:12:23.355 54925.410 - 55175.070: 99.8309% ( 4) 00:12:23.355 55175.070 - 55424.731: 99.8631% ( 4) 00:12:23.355 55424.731 - 55674.392: 99.8872% ( 3) 00:12:23.355 55674.392 - 55924.053: 99.9195% ( 4) 00:12:23.355 55924.053 - 56173.714: 99.9517% ( 4) 00:12:23.355 56173.714 - 56423.375: 99.9839% ( 4) 00:12:23.355 56423.375 - 56673.036: 100.0000% ( 2) 00:12:23.355 00:12:23.355 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:12:23.355 ============================================================================== 00:12:23.355 Range in us Cumulative IO count 00:12:23.355 8176.396 - 
8238.811: 0.0081% ( 1) 00:12:23.355 8238.811 - 8301.227: 0.0966% ( 11) 00:12:23.355 8301.227 - 8363.642: 0.2577% ( 20) 00:12:23.355 8363.642 - 8426.057: 0.5316% ( 34) 00:12:23.355 8426.057 - 8488.472: 1.0470% ( 64) 00:12:23.355 8488.472 - 8550.888: 1.7236% ( 84) 00:12:23.355 8550.888 - 8613.303: 2.5693% ( 105) 00:12:23.355 8613.303 - 8675.718: 3.6324% ( 132) 00:12:23.355 8675.718 - 8738.133: 4.9694% ( 166) 00:12:23.355 8738.133 - 8800.549: 6.6447% ( 208) 00:12:23.355 8800.549 - 8862.964: 8.5132% ( 232) 00:12:23.355 8862.964 - 8925.379: 10.5992% ( 259) 00:12:23.355 8925.379 - 8987.794: 12.8624% ( 281) 00:12:23.355 8987.794 - 9050.210: 15.3753% ( 312) 00:12:23.355 9050.210 - 9112.625: 17.9446% ( 319) 00:12:23.355 9112.625 - 9175.040: 20.5944% ( 329) 00:12:23.355 9175.040 - 9237.455: 23.3731% ( 345) 00:12:23.355 9237.455 - 9299.870: 26.1759% ( 348) 00:12:23.355 9299.870 - 9362.286: 29.2284% ( 379) 00:12:23.355 9362.286 - 9424.701: 32.3454% ( 387) 00:12:23.355 9424.701 - 9487.116: 35.5509% ( 398) 00:12:23.355 9487.116 - 9549.531: 38.8692% ( 412) 00:12:23.355 9549.531 - 9611.947: 42.1714% ( 410) 00:12:23.355 9611.947 - 9674.362: 45.4253% ( 404) 00:12:23.355 9674.362 - 9736.777: 48.5986% ( 394) 00:12:23.355 9736.777 - 9799.192: 51.6511% ( 379) 00:12:23.355 9799.192 - 9861.608: 54.5747% ( 363) 00:12:23.355 9861.608 - 9924.023: 57.3937% ( 350) 00:12:23.355 9924.023 - 9986.438: 60.0596% ( 331) 00:12:23.355 9986.438 - 10048.853: 62.5725% ( 312) 00:12:23.355 10048.853 - 10111.269: 64.9807% ( 299) 00:12:23.355 10111.269 - 10173.684: 67.0828% ( 261) 00:12:23.355 10173.684 - 10236.099: 69.1608% ( 258) 00:12:23.355 10236.099 - 10298.514: 71.1340% ( 245) 00:12:23.355 10298.514 - 10360.930: 73.0267% ( 235) 00:12:23.355 10360.930 - 10423.345: 74.8792% ( 230) 00:12:23.355 10423.345 - 10485.760: 76.6108% ( 215) 00:12:23.355 10485.760 - 10548.175: 78.2619% ( 205) 00:12:23.355 10548.175 - 10610.590: 79.6875% ( 177) 00:12:23.355 10610.590 - 10673.006: 81.0970% ( 175) 00:12:23.355 10673.006 - 10735.421: 82.3131% ( 151) 00:12:23.355 10735.421 - 10797.836: 83.4246% ( 138) 00:12:23.355 10797.836 - 10860.251: 84.4475% ( 127) 00:12:23.355 10860.251 - 10922.667: 85.3898% ( 117) 00:12:23.355 10922.667 - 10985.082: 86.2999% ( 113) 00:12:23.355 10985.082 - 11047.497: 87.1376% ( 104) 00:12:23.355 11047.497 - 11109.912: 87.8947% ( 94) 00:12:23.355 11109.912 - 11172.328: 88.5229% ( 78) 00:12:23.355 11172.328 - 11234.743: 89.1914% ( 83) 00:12:23.355 11234.743 - 11297.158: 89.7632% ( 71) 00:12:23.355 11297.158 - 11359.573: 90.3270% ( 70) 00:12:23.355 11359.573 - 11421.989: 90.8505% ( 65) 00:12:23.355 11421.989 - 11484.404: 91.3257% ( 59) 00:12:23.355 11484.404 - 11546.819: 91.7687% ( 55) 00:12:23.355 11546.819 - 11609.234: 92.2197% ( 56) 00:12:23.355 11609.234 - 11671.650: 92.6466% ( 53) 00:12:23.355 11671.650 - 11734.065: 93.0735% ( 53) 00:12:23.355 11734.065 - 11796.480: 93.4198% ( 43) 00:12:23.355 11796.480 - 11858.895: 93.7419% ( 40) 00:12:23.355 11858.895 - 11921.310: 94.0399% ( 37) 00:12:23.355 11921.310 - 11983.726: 94.3138% ( 34) 00:12:23.355 11983.726 - 12046.141: 94.5796% ( 33) 00:12:23.355 12046.141 - 12108.556: 94.8695% ( 36) 00:12:23.355 12108.556 - 12170.971: 95.1514% ( 35) 00:12:23.355 12170.971 - 12233.387: 95.3769% ( 28) 00:12:23.355 12233.387 - 12295.802: 95.6105% ( 29) 00:12:23.355 12295.802 - 12358.217: 95.8521% ( 30) 00:12:23.355 12358.217 - 12420.632: 96.0615% ( 26) 00:12:23.355 12420.632 - 12483.048: 96.2548% ( 24) 00:12:23.355 12483.048 - 12545.463: 96.4803% ( 28) 00:12:23.355 12545.463 - 12607.878: 
96.6656% ( 23) 00:12:23.355 12607.878 - 12670.293: 96.8025% ( 17) 00:12:23.355 12670.293 - 12732.709: 96.9153% ( 14) 00:12:23.355 12732.709 - 12795.124: 97.0280% ( 14) 00:12:23.355 12795.124 - 12857.539: 97.1166% ( 11) 00:12:23.355 12857.539 - 12919.954: 97.2052% ( 11) 00:12:23.355 12919.954 - 12982.370: 97.2938% ( 11) 00:12:23.355 12982.370 - 13044.785: 97.3744% ( 10) 00:12:23.355 13044.785 - 13107.200: 97.4630% ( 11) 00:12:23.355 13107.200 - 13169.615: 97.5354% ( 9) 00:12:23.355 13169.615 - 13232.030: 97.6160% ( 10) 00:12:23.355 13232.030 - 13294.446: 97.7046% ( 11) 00:12:23.355 13294.446 - 13356.861: 97.7610% ( 7) 00:12:23.355 13356.861 - 13419.276: 97.8254% ( 8) 00:12:23.355 13419.276 - 13481.691: 97.8979% ( 9) 00:12:23.355 13481.691 - 13544.107: 97.9462% ( 6) 00:12:23.355 13544.107 - 13606.522: 98.0026% ( 7) 00:12:23.355 13606.522 - 13668.937: 98.0670% ( 8) 00:12:23.355 13668.937 - 13731.352: 98.1234% ( 7) 00:12:23.355 13731.352 - 13793.768: 98.1959% ( 9) 00:12:23.355 13793.768 - 13856.183: 98.2442% ( 6) 00:12:23.355 13856.183 - 13918.598: 98.2845% ( 5) 00:12:23.355 13918.598 - 13981.013: 98.3006% ( 2) 00:12:23.355 13981.013 - 14043.429: 98.3167% ( 2) 00:12:23.355 14043.429 - 14105.844: 98.3247% ( 1) 00:12:23.355 14105.844 - 14168.259: 98.3409% ( 2) 00:12:23.355 14168.259 - 14230.674: 98.3570% ( 2) 00:12:23.355 14230.674 - 14293.090: 98.3731% ( 2) 00:12:23.355 14293.090 - 14355.505: 98.3811% ( 1) 00:12:23.355 14355.505 - 14417.920: 98.3972% ( 2) 00:12:23.355 14417.920 - 14480.335: 98.4133% ( 2) 00:12:23.355 14480.335 - 14542.750: 98.4294% ( 2) 00:12:23.355 14542.750 - 14605.166: 98.4456% ( 2) 00:12:23.355 14605.166 - 14667.581: 98.4536% ( 1) 00:12:23.355 15728.640 - 15791.055: 98.4617% ( 1) 00:12:23.355 15791.055 - 15853.470: 98.4778% ( 2) 00:12:23.355 15853.470 - 15915.886: 98.4858% ( 1) 00:12:23.355 15915.886 - 15978.301: 98.5019% ( 2) 00:12:23.355 15978.301 - 16103.131: 98.5261% ( 3) 00:12:23.355 16103.131 - 16227.962: 98.5583% ( 4) 00:12:23.355 16227.962 - 16352.792: 98.5905% ( 4) 00:12:23.355 16352.792 - 16477.623: 98.6147% ( 3) 00:12:23.355 16477.623 - 16602.453: 98.6469% ( 4) 00:12:23.356 16602.453 - 16727.284: 98.6711% ( 3) 00:12:23.356 16727.284 - 16852.114: 98.7033% ( 4) 00:12:23.356 16852.114 - 16976.945: 98.7274% ( 3) 00:12:23.356 16976.945 - 17101.775: 98.7597% ( 4) 00:12:23.356 17101.775 - 17226.606: 98.7838% ( 3) 00:12:23.356 17226.606 - 17351.436: 98.8160% ( 4) 00:12:23.356 17351.436 - 17476.267: 98.8402% ( 3) 00:12:23.356 17476.267 - 17601.097: 98.8724% ( 4) 00:12:23.356 17601.097 - 17725.928: 98.9046% ( 4) 00:12:23.356 17725.928 - 17850.758: 98.9288% ( 3) 00:12:23.356 17850.758 - 17975.589: 98.9610% ( 4) 00:12:23.356 17975.589 - 18100.419: 98.9691% ( 1) 00:12:23.356 40195.413 - 40445.074: 99.0013% ( 4) 00:12:23.356 40445.074 - 40694.735: 99.0496% ( 6) 00:12:23.356 40694.735 - 40944.396: 99.0818% ( 4) 00:12:23.356 40944.396 - 41194.057: 99.1140% ( 4) 00:12:23.356 41194.057 - 41443.718: 99.1543% ( 5) 00:12:23.356 41443.718 - 41693.379: 99.1946% ( 5) 00:12:23.356 41693.379 - 41943.040: 99.2268% ( 4) 00:12:23.356 41943.040 - 42192.701: 99.2751% ( 6) 00:12:23.356 42192.701 - 42442.362: 99.3154% ( 5) 00:12:23.356 42442.362 - 42692.023: 99.3557% ( 5) 00:12:23.356 42692.023 - 42941.684: 99.3959% ( 5) 00:12:23.356 42941.684 - 43191.345: 99.4443% ( 6) 00:12:23.356 43191.345 - 43441.006: 99.4845% ( 5) 00:12:23.356 49183.208 - 49432.869: 99.5087% ( 3) 00:12:23.356 49432.869 - 49682.530: 99.5570% ( 6) 00:12:23.356 49682.530 - 49932.190: 99.5892% ( 4) 00:12:23.356 49932.190 - 
50181.851: 99.6215% ( 4) 00:12:23.356 50181.851 - 50431.512: 99.6617% ( 5) 00:12:23.356 50431.512 - 50681.173: 99.7020% ( 5) 00:12:23.356 50681.173 - 50930.834: 99.7342% ( 4) 00:12:23.356 50930.834 - 51180.495: 99.7745% ( 5) 00:12:23.356 51180.495 - 51430.156: 99.8148% ( 5) 00:12:23.356 51430.156 - 51679.817: 99.8470% ( 4) 00:12:23.356 51679.817 - 51929.478: 99.8872% ( 5) 00:12:23.356 51929.478 - 52179.139: 99.9275% ( 5) 00:12:23.356 52179.139 - 52428.800: 99.9678% ( 5) 00:12:23.356 52428.800 - 52678.461: 100.0000% ( 4) 00:12:23.356 00:12:23.356 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:12:23.356 ============================================================================== 00:12:23.356 Range in us Cumulative IO count 00:12:23.356 8238.811 - 8301.227: 0.0564% ( 7) 00:12:23.356 8301.227 - 8363.642: 0.2255% ( 21) 00:12:23.356 8363.642 - 8426.057: 0.4349% ( 26) 00:12:23.356 8426.057 - 8488.472: 0.8698% ( 54) 00:12:23.356 8488.472 - 8550.888: 1.5222% ( 81) 00:12:23.356 8550.888 - 8613.303: 2.4565% ( 116) 00:12:23.356 8613.303 - 8675.718: 3.5035% ( 130) 00:12:23.356 8675.718 - 8738.133: 4.9533% ( 180) 00:12:23.356 8738.133 - 8800.549: 6.7171% ( 219) 00:12:23.356 8800.549 - 8862.964: 8.6018% ( 234) 00:12:23.356 8862.964 - 8925.379: 10.6717% ( 257) 00:12:23.356 8925.379 - 8987.794: 12.8624% ( 272) 00:12:23.356 8987.794 - 9050.210: 15.3351% ( 307) 00:12:23.356 9050.210 - 9112.625: 18.0815% ( 341) 00:12:23.356 9112.625 - 9175.040: 20.7877% ( 336) 00:12:23.356 9175.040 - 9237.455: 23.4778% ( 334) 00:12:23.356 9237.455 - 9299.870: 26.3773% ( 360) 00:12:23.356 9299.870 - 9362.286: 29.3412% ( 368) 00:12:23.356 9362.286 - 9424.701: 32.5709% ( 401) 00:12:23.356 9424.701 - 9487.116: 35.9133% ( 415) 00:12:23.356 9487.116 - 9549.531: 39.2155% ( 410) 00:12:23.356 9549.531 - 9611.947: 42.6466% ( 426) 00:12:23.356 9611.947 - 9674.362: 45.8521% ( 398) 00:12:23.356 9674.362 - 9736.777: 49.0013% ( 391) 00:12:23.356 9736.777 - 9799.192: 52.0296% ( 376) 00:12:23.356 9799.192 - 9861.608: 54.9291% ( 360) 00:12:23.356 9861.608 - 9924.023: 57.6434% ( 337) 00:12:23.356 9924.023 - 9986.438: 60.3495% ( 336) 00:12:23.356 9986.438 - 10048.853: 62.6852% ( 290) 00:12:23.356 10048.853 - 10111.269: 64.9404% ( 280) 00:12:23.356 10111.269 - 10173.684: 67.1070% ( 269) 00:12:23.356 10173.684 - 10236.099: 69.2332% ( 264) 00:12:23.356 10236.099 - 10298.514: 71.1662% ( 240) 00:12:23.356 10298.514 - 10360.930: 73.0187% ( 230) 00:12:23.356 10360.930 - 10423.345: 74.7664% ( 217) 00:12:23.356 10423.345 - 10485.760: 76.4014% ( 203) 00:12:23.356 10485.760 - 10548.175: 77.9800% ( 196) 00:12:23.356 10548.175 - 10610.590: 79.3976% ( 176) 00:12:23.356 10610.590 - 10673.006: 80.6540% ( 156) 00:12:23.356 10673.006 - 10735.421: 81.9427% ( 160) 00:12:23.356 10735.421 - 10797.836: 83.1749% ( 153) 00:12:23.356 10797.836 - 10860.251: 84.2139% ( 129) 00:12:23.356 10860.251 - 10922.667: 85.2207% ( 125) 00:12:23.356 10922.667 - 10985.082: 86.1630% ( 117) 00:12:23.356 10985.082 - 11047.497: 86.9282% ( 95) 00:12:23.356 11047.497 - 11109.912: 87.6530% ( 90) 00:12:23.356 11109.912 - 11172.328: 88.3779% ( 90) 00:12:23.356 11172.328 - 11234.743: 89.0867% ( 88) 00:12:23.356 11234.743 - 11297.158: 89.7229% ( 79) 00:12:23.356 11297.158 - 11359.573: 90.3109% ( 73) 00:12:23.356 11359.573 - 11421.989: 90.7700% ( 57) 00:12:23.356 11421.989 - 11484.404: 91.2613% ( 61) 00:12:23.356 11484.404 - 11546.819: 91.7606% ( 62) 00:12:23.356 11546.819 - 11609.234: 92.2278% ( 58) 00:12:23.356 11609.234 - 11671.650: 92.6466% ( 52) 00:12:23.356 11671.650 - 
11734.065: 93.0171% ( 46) 00:12:23.356 11734.065 - 11796.480: 93.3473% ( 41) 00:12:23.356 11796.480 - 11858.895: 93.6775% ( 41) 00:12:23.356 11858.895 - 11921.310: 93.9111% ( 29) 00:12:23.356 11921.310 - 11983.726: 94.1769% ( 33) 00:12:23.356 11983.726 - 12046.141: 94.4265% ( 31) 00:12:23.356 12046.141 - 12108.556: 94.6601% ( 29) 00:12:23.356 12108.556 - 12170.971: 94.9017% ( 30) 00:12:23.356 12170.971 - 12233.387: 95.1031% ( 25) 00:12:23.356 12233.387 - 12295.802: 95.3367% ( 29) 00:12:23.356 12295.802 - 12358.217: 95.5783% ( 30) 00:12:23.356 12358.217 - 12420.632: 95.8038% ( 28) 00:12:23.356 12420.632 - 12483.048: 95.9890% ( 23) 00:12:23.356 12483.048 - 12545.463: 96.2065% ( 27) 00:12:23.356 12545.463 - 12607.878: 96.3918% ( 23) 00:12:23.356 12607.878 - 12670.293: 96.5689% ( 22) 00:12:23.356 12670.293 - 12732.709: 96.7300% ( 20) 00:12:23.356 12732.709 - 12795.124: 96.8589% ( 16) 00:12:23.356 12795.124 - 12857.539: 96.9797% ( 15) 00:12:23.356 12857.539 - 12919.954: 97.1086% ( 16) 00:12:23.356 12919.954 - 12982.370: 97.2213% ( 14) 00:12:23.356 12982.370 - 13044.785: 97.3421% ( 15) 00:12:23.356 13044.785 - 13107.200: 97.4468% ( 13) 00:12:23.356 13107.200 - 13169.615: 97.5435% ( 12) 00:12:23.356 13169.615 - 13232.030: 97.6240% ( 10) 00:12:23.356 13232.030 - 13294.446: 97.7126% ( 11) 00:12:23.356 13294.446 - 13356.861: 97.8012% ( 11) 00:12:23.356 13356.861 - 13419.276: 97.8898% ( 11) 00:12:23.356 13419.276 - 13481.691: 97.9704% ( 10) 00:12:23.356 13481.691 - 13544.107: 98.0187% ( 6) 00:12:23.356 13544.107 - 13606.522: 98.0912% ( 9) 00:12:23.356 13606.522 - 13668.937: 98.1476% ( 7) 00:12:23.356 13668.937 - 13731.352: 98.2120% ( 8) 00:12:23.356 13731.352 - 13793.768: 98.2603% ( 6) 00:12:23.356 13793.768 - 13856.183: 98.3006% ( 5) 00:12:23.356 13856.183 - 13918.598: 98.3167% ( 2) 00:12:23.356 13918.598 - 13981.013: 98.3328% ( 2) 00:12:23.356 13981.013 - 14043.429: 98.3489% ( 2) 00:12:23.356 14043.429 - 14105.844: 98.3570% ( 1) 00:12:23.356 14105.844 - 14168.259: 98.3731% ( 2) 00:12:23.356 14168.259 - 14230.674: 98.3892% ( 2) 00:12:23.356 14230.674 - 14293.090: 98.4053% ( 2) 00:12:23.356 14293.090 - 14355.505: 98.4294% ( 3) 00:12:23.356 14355.505 - 14417.920: 98.4375% ( 1) 00:12:23.356 14417.920 - 14480.335: 98.4536% ( 2) 00:12:23.356 15853.470 - 15915.886: 98.4697% ( 2) 00:12:23.356 15915.886 - 15978.301: 98.4858% ( 2) 00:12:23.356 15978.301 - 16103.131: 98.5100% ( 3) 00:12:23.356 16103.131 - 16227.962: 98.5422% ( 4) 00:12:23.356 16227.962 - 16352.792: 98.5744% ( 4) 00:12:23.356 16352.792 - 16477.623: 98.6147% ( 5) 00:12:23.356 16477.623 - 16602.453: 98.6469% ( 4) 00:12:23.356 16602.453 - 16727.284: 98.6711% ( 3) 00:12:23.356 16727.284 - 16852.114: 98.7033% ( 4) 00:12:23.356 16852.114 - 16976.945: 98.7274% ( 3) 00:12:23.356 16976.945 - 17101.775: 98.7516% ( 3) 00:12:23.356 17101.775 - 17226.606: 98.7838% ( 4) 00:12:23.356 17226.606 - 17351.436: 98.8160% ( 4) 00:12:23.356 17351.436 - 17476.267: 98.8402% ( 3) 00:12:23.356 17476.267 - 17601.097: 98.8724% ( 4) 00:12:23.356 17601.097 - 17725.928: 98.9046% ( 4) 00:12:23.356 17725.928 - 17850.758: 98.9369% ( 4) 00:12:23.356 17850.758 - 17975.589: 98.9610% ( 3) 00:12:23.356 17975.589 - 18100.419: 98.9691% ( 1) 00:12:23.357 36949.821 - 37199.482: 98.9932% ( 3) 00:12:23.357 37199.482 - 37449.143: 99.0335% ( 5) 00:12:23.357 37449.143 - 37698.804: 99.0657% ( 4) 00:12:23.357 37698.804 - 37948.465: 99.1060% ( 5) 00:12:23.357 37948.465 - 38198.126: 99.1463% ( 5) 00:12:23.357 38198.126 - 38447.787: 99.1865% ( 5) 00:12:23.357 38447.787 - 38697.448: 99.2268% ( 5) 
00:12:23.357 38697.448 - 38947.109: 99.2671% ( 5) 00:12:23.357 38947.109 - 39196.770: 99.3073% ( 5) 00:12:23.357 39196.770 - 39446.430: 99.3396% ( 4) 00:12:23.357 39446.430 - 39696.091: 99.3798% ( 5) 00:12:23.357 39696.091 - 39945.752: 99.4201% ( 5) 00:12:23.357 39945.752 - 40195.413: 99.4604% ( 5) 00:12:23.357 40195.413 - 40445.074: 99.4845% ( 3) 00:12:23.357 46436.937 - 46686.598: 99.5168% ( 4) 00:12:23.357 46686.598 - 46936.259: 99.5570% ( 5) 00:12:23.357 46936.259 - 47185.920: 99.5973% ( 5) 00:12:23.357 47185.920 - 47435.581: 99.6376% ( 5) 00:12:23.357 47435.581 - 47685.242: 99.6778% ( 5) 00:12:23.357 47685.242 - 47934.903: 99.7181% ( 5) 00:12:23.357 47934.903 - 48184.564: 99.7584% ( 5) 00:12:23.357 48184.564 - 48434.225: 99.8067% ( 6) 00:12:23.357 48434.225 - 48683.886: 99.8470% ( 5) 00:12:23.357 48683.886 - 48933.547: 99.8792% ( 4) 00:12:23.357 48933.547 - 49183.208: 99.9275% ( 6) 00:12:23.357 49183.208 - 49432.869: 99.9678% ( 5) 00:12:23.357 49432.869 - 49682.530: 100.0000% ( 4) 00:12:23.357 00:12:23.357 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:12:23.357 ============================================================================== 00:12:23.357 Range in us Cumulative IO count 00:12:23.357 8176.396 - 8238.811: 0.0322% ( 4) 00:12:23.357 8238.811 - 8301.227: 0.1047% ( 9) 00:12:23.357 8301.227 - 8363.642: 0.2658% ( 20) 00:12:23.357 8363.642 - 8426.057: 0.4994% ( 29) 00:12:23.357 8426.057 - 8488.472: 0.9907% ( 61) 00:12:23.357 8488.472 - 8550.888: 1.7075% ( 89) 00:12:23.357 8550.888 - 8613.303: 2.5209% ( 101) 00:12:23.357 8613.303 - 8675.718: 3.6405% ( 139) 00:12:23.357 8675.718 - 8738.133: 4.9774% ( 166) 00:12:23.357 8738.133 - 8800.549: 6.8138% ( 228) 00:12:23.357 8800.549 - 8862.964: 8.7629% ( 242) 00:12:23.357 8862.964 - 8925.379: 10.9053% ( 266) 00:12:23.357 8925.379 - 8987.794: 13.0880% ( 271) 00:12:23.357 8987.794 - 9050.210: 15.4559% ( 294) 00:12:23.357 9050.210 - 9112.625: 18.1218% ( 331) 00:12:23.357 9112.625 - 9175.040: 20.8521% ( 339) 00:12:23.357 9175.040 - 9237.455: 23.6469% ( 347) 00:12:23.357 9237.455 - 9299.870: 26.6028% ( 367) 00:12:23.357 9299.870 - 9362.286: 29.5747% ( 369) 00:12:23.357 9362.286 - 9424.701: 32.6917% ( 387) 00:12:23.357 9424.701 - 9487.116: 36.0825% ( 421) 00:12:23.357 9487.116 - 9549.531: 39.2880% ( 398) 00:12:23.357 9549.531 - 9611.947: 42.6466% ( 417) 00:12:23.357 9611.947 - 9674.362: 45.9005% ( 404) 00:12:23.357 9674.362 - 9736.777: 49.0496% ( 391) 00:12:23.357 9736.777 - 9799.192: 52.0296% ( 370) 00:12:23.357 9799.192 - 9861.608: 54.9452% ( 362) 00:12:23.357 9861.608 - 9924.023: 57.6434% ( 335) 00:12:23.357 9924.023 - 9986.438: 60.1079% ( 306) 00:12:23.357 9986.438 - 10048.853: 62.5161% ( 299) 00:12:23.357 10048.853 - 10111.269: 64.9323% ( 300) 00:12:23.357 10111.269 - 10173.684: 67.2197% ( 284) 00:12:23.357 10173.684 - 10236.099: 69.2252% ( 249) 00:12:23.357 10236.099 - 10298.514: 71.1904% ( 244) 00:12:23.357 10298.514 - 10360.930: 73.0106% ( 226) 00:12:23.357 10360.930 - 10423.345: 74.7745% ( 219) 00:12:23.357 10423.345 - 10485.760: 76.4014% ( 202) 00:12:23.357 10485.760 - 10548.175: 77.8753% ( 183) 00:12:23.357 10548.175 - 10610.590: 79.3090% ( 178) 00:12:23.357 10610.590 - 10673.006: 80.5976% ( 160) 00:12:23.357 10673.006 - 10735.421: 81.8460% ( 155) 00:12:23.357 10735.421 - 10797.836: 83.1105% ( 157) 00:12:23.357 10797.836 - 10860.251: 84.3186% ( 150) 00:12:23.357 10860.251 - 10922.667: 85.4543% ( 141) 00:12:23.357 10922.667 - 10985.082: 86.4852% ( 128) 00:12:23.357 10985.082 - 11047.497: 87.3631% ( 109) 00:12:23.357 
11047.497 - 11109.912: 88.0799% ( 89) 00:12:23.357 11109.912 - 11172.328: 88.7242% ( 80) 00:12:23.357 11172.328 - 11234.743: 89.3283% ( 75) 00:12:23.357 11234.743 - 11297.158: 89.8840% ( 69) 00:12:23.357 11297.158 - 11359.573: 90.3431% ( 57) 00:12:23.357 11359.573 - 11421.989: 90.8102% ( 58) 00:12:23.357 11421.989 - 11484.404: 91.2693% ( 57) 00:12:23.357 11484.404 - 11546.819: 91.7043% ( 54) 00:12:23.357 11546.819 - 11609.234: 92.0828% ( 47) 00:12:23.357 11609.234 - 11671.650: 92.5177% ( 54) 00:12:23.357 11671.650 - 11734.065: 92.8802% ( 45) 00:12:23.357 11734.065 - 11796.480: 93.2265% ( 43) 00:12:23.357 11796.480 - 11858.895: 93.5325% ( 38) 00:12:23.357 11858.895 - 11921.310: 93.8466% ( 39) 00:12:23.357 11921.310 - 11983.726: 94.1044% ( 32) 00:12:23.357 11983.726 - 12046.141: 94.3943% ( 36) 00:12:23.357 12046.141 - 12108.556: 94.6762% ( 35) 00:12:23.357 12108.556 - 12170.971: 94.9420% ( 33) 00:12:23.357 12170.971 - 12233.387: 95.2159% ( 34) 00:12:23.357 12233.387 - 12295.802: 95.4172% ( 25) 00:12:23.357 12295.802 - 12358.217: 95.6347% ( 27) 00:12:23.357 12358.217 - 12420.632: 95.8119% ( 22) 00:12:23.357 12420.632 - 12483.048: 95.9810% ( 21) 00:12:23.357 12483.048 - 12545.463: 96.1421% ( 20) 00:12:23.357 12545.463 - 12607.878: 96.2951% ( 19) 00:12:23.357 12607.878 - 12670.293: 96.4159% ( 15) 00:12:23.357 12670.293 - 12732.709: 96.5367% ( 15) 00:12:23.357 12732.709 - 12795.124: 96.6575% ( 15) 00:12:23.357 12795.124 - 12857.539: 96.7542% ( 12) 00:12:23.357 12857.539 - 12919.954: 96.8669% ( 14) 00:12:23.357 12919.954 - 12982.370: 96.9636% ( 12) 00:12:23.357 12982.370 - 13044.785: 97.0764% ( 14) 00:12:23.357 13044.785 - 13107.200: 97.1891% ( 14) 00:12:23.357 13107.200 - 13169.615: 97.2858% ( 12) 00:12:23.357 13169.615 - 13232.030: 97.3905% ( 13) 00:12:23.357 13232.030 - 13294.446: 97.4630% ( 9) 00:12:23.357 13294.446 - 13356.861: 97.5515% ( 11) 00:12:23.357 13356.861 - 13419.276: 97.6401% ( 11) 00:12:23.357 13419.276 - 13481.691: 97.7207% ( 10) 00:12:23.357 13481.691 - 13544.107: 97.8012% ( 10) 00:12:23.357 13544.107 - 13606.522: 97.8657% ( 8) 00:12:23.357 13606.522 - 13668.937: 97.9301% ( 8) 00:12:23.357 13668.937 - 13731.352: 97.9704% ( 5) 00:12:23.357 13731.352 - 13793.768: 98.0106% ( 5) 00:12:23.357 13793.768 - 13856.183: 98.0590% ( 6) 00:12:23.357 13856.183 - 13918.598: 98.0912% ( 4) 00:12:23.357 13918.598 - 13981.013: 98.1395% ( 6) 00:12:23.357 13981.013 - 14043.429: 98.1798% ( 5) 00:12:23.357 14043.429 - 14105.844: 98.2200% ( 5) 00:12:23.357 14105.844 - 14168.259: 98.2281% ( 1) 00:12:23.357 14168.259 - 14230.674: 98.2523% ( 3) 00:12:23.357 14230.674 - 14293.090: 98.2603% ( 1) 00:12:23.357 14293.090 - 14355.505: 98.2684% ( 1) 00:12:23.357 14355.505 - 14417.920: 98.2845% ( 2) 00:12:23.357 14417.920 - 14480.335: 98.3006% ( 2) 00:12:23.357 14480.335 - 14542.750: 98.3167% ( 2) 00:12:23.357 14542.750 - 14605.166: 98.3328% ( 2) 00:12:23.357 14605.166 - 14667.581: 98.3489% ( 2) 00:12:23.357 14667.581 - 14729.996: 98.3570% ( 1) 00:12:23.357 14729.996 - 14792.411: 98.3650% ( 1) 00:12:23.357 14792.411 - 14854.827: 98.3811% ( 2) 00:12:23.357 14854.827 - 14917.242: 98.3972% ( 2) 00:12:23.357 14917.242 - 14979.657: 98.4214% ( 3) 00:12:23.357 14979.657 - 15042.072: 98.4294% ( 1) 00:12:23.357 15042.072 - 15104.488: 98.4456% ( 2) 00:12:23.357 15104.488 - 15166.903: 98.4536% ( 1) 00:12:23.357 16352.792 - 16477.623: 98.4617% ( 1) 00:12:23.357 16477.623 - 16602.453: 98.4939% ( 4) 00:12:23.357 16602.453 - 16727.284: 98.5261% ( 4) 00:12:23.357 16727.284 - 16852.114: 98.5503% ( 3) 00:12:23.357 16852.114 - 
16976.945: 98.5825% ( 4) 00:12:23.357 16976.945 - 17101.775: 98.6147% ( 4) 00:12:23.357 17101.775 - 17226.606: 98.6469% ( 4) 00:12:23.357 17226.606 - 17351.436: 98.6791% ( 4) 00:12:23.357 17351.436 - 17476.267: 98.7113% ( 4) 00:12:23.357 17476.267 - 17601.097: 98.7436% ( 4) 00:12:23.357 17601.097 - 17725.928: 98.7758% ( 4) 00:12:23.357 17725.928 - 17850.758: 98.7999% ( 3) 00:12:23.357 17850.758 - 17975.589: 98.8322% ( 4) 00:12:23.357 17975.589 - 18100.419: 98.8644% ( 4) 00:12:23.357 18100.419 - 18225.250: 98.8885% ( 3) 00:12:23.357 18225.250 - 18350.080: 98.9288% ( 5) 00:12:23.357 18350.080 - 18474.910: 98.9530% ( 3) 00:12:23.357 18474.910 - 18599.741: 98.9691% ( 2) 00:12:23.357 33953.890 - 34203.550: 98.9932% ( 3) 00:12:23.357 34203.550 - 34453.211: 99.0416% ( 6) 00:12:23.357 34453.211 - 34702.872: 99.0818% ( 5) 00:12:23.357 34702.872 - 34952.533: 99.1221% ( 5) 00:12:23.357 34952.533 - 35202.194: 99.1704% ( 6) 00:12:23.357 35202.194 - 35451.855: 99.2107% ( 5) 00:12:23.358 35451.855 - 35701.516: 99.2510% ( 5) 00:12:23.358 35701.516 - 35951.177: 99.2993% ( 6) 00:12:23.358 35951.177 - 36200.838: 99.3476% ( 6) 00:12:23.358 36200.838 - 36450.499: 99.3879% ( 5) 00:12:23.358 36450.499 - 36700.160: 99.4362% ( 6) 00:12:23.358 36700.160 - 36949.821: 99.4845% ( 6) 00:12:23.358 43191.345 - 43441.006: 99.5248% ( 5) 00:12:23.358 43441.006 - 43690.667: 99.5651% ( 5) 00:12:23.358 43690.667 - 43940.328: 99.6053% ( 5) 00:12:23.358 43940.328 - 44189.989: 99.6537% ( 6) 00:12:23.358 44189.989 - 44439.650: 99.6939% ( 5) 00:12:23.358 44439.650 - 44689.310: 99.7342% ( 5) 00:12:23.358 44689.310 - 44938.971: 99.7745% ( 5) 00:12:23.358 44938.971 - 45188.632: 99.8228% ( 6) 00:12:23.358 45188.632 - 45438.293: 99.8631% ( 5) 00:12:23.358 45438.293 - 45687.954: 99.9114% ( 6) 00:12:23.358 45687.954 - 45937.615: 99.9517% ( 5) 00:12:23.358 45937.615 - 46187.276: 99.9919% ( 5) 00:12:23.358 46187.276 - 46436.937: 100.0000% ( 1) 00:12:23.358 00:12:23.358 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:12:23.358 ============================================================================== 00:12:23.358 Range in us Cumulative IO count 00:12:23.358 8176.396 - 8238.811: 0.0322% ( 4) 00:12:23.358 8238.811 - 8301.227: 0.1208% ( 11) 00:12:23.358 8301.227 - 8363.642: 0.2658% ( 18) 00:12:23.358 8363.642 - 8426.057: 0.5880% ( 40) 00:12:23.358 8426.057 - 8488.472: 1.0954% ( 63) 00:12:23.358 8488.472 - 8550.888: 1.7155% ( 77) 00:12:23.358 8550.888 - 8613.303: 2.5612% ( 105) 00:12:23.358 8613.303 - 8675.718: 3.6244% ( 132) 00:12:23.358 8675.718 - 8738.133: 4.9533% ( 165) 00:12:23.358 8738.133 - 8800.549: 6.6688% ( 213) 00:12:23.358 8800.549 - 8862.964: 8.7548% ( 259) 00:12:23.358 8862.964 - 8925.379: 10.9939% ( 278) 00:12:23.358 8925.379 - 8987.794: 13.4182% ( 301) 00:12:23.358 8987.794 - 9050.210: 16.0116% ( 322) 00:12:23.358 9050.210 - 9112.625: 18.6775% ( 331) 00:12:23.358 9112.625 - 9175.040: 21.3998% ( 338) 00:12:23.358 9175.040 - 9237.455: 24.1624% ( 343) 00:12:23.358 9237.455 - 9299.870: 27.0296% ( 356) 00:12:23.358 9299.870 - 9362.286: 30.0822% ( 379) 00:12:23.358 9362.286 - 9424.701: 33.2474% ( 393) 00:12:23.358 9424.701 - 9487.116: 36.5496% ( 410) 00:12:23.358 9487.116 - 9549.531: 39.8679% ( 412) 00:12:23.358 9549.531 - 9611.947: 43.2265% ( 417) 00:12:23.358 9611.947 - 9674.362: 46.3354% ( 386) 00:12:23.358 9674.362 - 9736.777: 49.4523% ( 387) 00:12:23.358 9736.777 - 9799.192: 52.4565% ( 373) 00:12:23.358 9799.192 - 9861.608: 55.4124% ( 367) 00:12:23.358 9861.608 - 9924.023: 58.1669% ( 342) 00:12:23.358 9924.023 
- 9986.438: 60.7039% ( 315) 00:12:23.358 9986.438 - 10048.853: 63.1685% ( 306) 00:12:23.358 10048.853 - 10111.269: 65.2867% ( 263) 00:12:23.358 10111.269 - 10173.684: 67.4291% ( 266) 00:12:23.358 10173.684 - 10236.099: 69.3138% ( 234) 00:12:23.358 10236.099 - 10298.514: 71.2307% ( 238) 00:12:23.358 10298.514 - 10360.930: 73.0509% ( 226) 00:12:23.358 10360.930 - 10423.345: 74.7181% ( 207) 00:12:23.358 10423.345 - 10485.760: 76.3692% ( 205) 00:12:23.358 10485.760 - 10548.175: 77.8673% ( 186) 00:12:23.358 10548.175 - 10610.590: 79.2445% ( 171) 00:12:23.358 10610.590 - 10673.006: 80.5332% ( 160) 00:12:23.358 10673.006 - 10735.421: 81.7494% ( 151) 00:12:23.358 10735.421 - 10797.836: 82.9253% ( 146) 00:12:23.358 10797.836 - 10860.251: 84.0367% ( 138) 00:12:23.358 10860.251 - 10922.667: 85.1321% ( 136) 00:12:23.358 10922.667 - 10985.082: 86.0744% ( 117) 00:12:23.358 10985.082 - 11047.497: 86.9040% ( 103) 00:12:23.358 11047.497 - 11109.912: 87.6611% ( 94) 00:12:23.358 11109.912 - 11172.328: 88.2974% ( 79) 00:12:23.358 11172.328 - 11234.743: 88.9175% ( 77) 00:12:23.358 11234.743 - 11297.158: 89.4491% ( 66) 00:12:23.358 11297.158 - 11359.573: 89.9726% ( 65) 00:12:23.358 11359.573 - 11421.989: 90.4075% ( 54) 00:12:23.358 11421.989 - 11484.404: 90.8344% ( 53) 00:12:23.358 11484.404 - 11546.819: 91.2532% ( 52) 00:12:23.358 11546.819 - 11609.234: 91.6881% ( 54) 00:12:23.358 11609.234 - 11671.650: 92.0828% ( 49) 00:12:23.358 11671.650 - 11734.065: 92.4855% ( 50) 00:12:23.358 11734.065 - 11796.480: 92.9043% ( 52) 00:12:23.358 11796.480 - 11858.895: 93.2506% ( 43) 00:12:23.358 11858.895 - 11921.310: 93.5970% ( 43) 00:12:23.358 11921.310 - 11983.726: 93.9352% ( 42) 00:12:23.358 11983.726 - 12046.141: 94.2574% ( 40) 00:12:23.358 12046.141 - 12108.556: 94.6037% ( 43) 00:12:23.358 12108.556 - 12170.971: 94.8615% ( 32) 00:12:23.358 12170.971 - 12233.387: 95.0950% ( 29) 00:12:23.358 12233.387 - 12295.802: 95.3206% ( 28) 00:12:23.358 12295.802 - 12358.217: 95.5300% ( 26) 00:12:23.358 12358.217 - 12420.632: 95.7474% ( 27) 00:12:23.358 12420.632 - 12483.048: 95.9488% ( 25) 00:12:23.358 12483.048 - 12545.463: 96.1260% ( 22) 00:12:23.358 12545.463 - 12607.878: 96.3032% ( 22) 00:12:23.358 12607.878 - 12670.293: 96.4803% ( 22) 00:12:23.358 12670.293 - 12732.709: 96.6253% ( 18) 00:12:23.358 12732.709 - 12795.124: 96.7461% ( 15) 00:12:23.358 12795.124 - 12857.539: 96.8750% ( 16) 00:12:23.358 12857.539 - 12919.954: 96.9797% ( 13) 00:12:23.358 12919.954 - 12982.370: 97.0683% ( 11) 00:12:23.358 12982.370 - 13044.785: 97.1488% ( 10) 00:12:23.358 13044.785 - 13107.200: 97.2455% ( 12) 00:12:23.358 13107.200 - 13169.615: 97.3421% ( 12) 00:12:23.358 13169.615 - 13232.030: 97.4388% ( 12) 00:12:23.358 13232.030 - 13294.446: 97.5113% ( 9) 00:12:23.358 13294.446 - 13356.861: 97.5677% ( 7) 00:12:23.358 13356.861 - 13419.276: 97.6079% ( 5) 00:12:23.358 13419.276 - 13481.691: 97.6321% ( 3) 00:12:23.358 13481.691 - 13544.107: 97.6562% ( 3) 00:12:23.358 13544.107 - 13606.522: 97.6965% ( 5) 00:12:23.358 13606.522 - 13668.937: 97.7368% ( 5) 00:12:23.358 13668.937 - 13731.352: 97.7771% ( 5) 00:12:23.358 13731.352 - 13793.768: 97.8173% ( 5) 00:12:23.358 13793.768 - 13856.183: 97.8657% ( 6) 00:12:23.358 13856.183 - 13918.598: 97.9059% ( 5) 00:12:23.358 13918.598 - 13981.013: 97.9462% ( 5) 00:12:23.358 13981.013 - 14043.429: 97.9945% ( 6) 00:12:23.358 14043.429 - 14105.844: 98.0267% ( 4) 00:12:23.358 14105.844 - 14168.259: 98.0751% ( 6) 00:12:23.358 14168.259 - 14230.674: 98.0992% ( 3) 00:12:23.358 14230.674 - 14293.090: 98.1153% ( 2) 
00:12:23.358 14293.090 - 14355.505: 98.1314% ( 2) 00:12:23.358 14355.505 - 14417.920: 98.1476% ( 2) 00:12:23.358 14417.920 - 14480.335: 98.1637% ( 2) 00:12:23.358 14480.335 - 14542.750: 98.1878% ( 3) 00:12:23.358 14542.750 - 14605.166: 98.1959% ( 1) 00:12:23.358 14605.166 - 14667.581: 98.2120% ( 2) 00:12:23.358 14667.581 - 14729.996: 98.2281% ( 2) 00:12:23.358 14729.996 - 14792.411: 98.2361% ( 1) 00:12:23.358 14792.411 - 14854.827: 98.2523% ( 2) 00:12:23.358 14854.827 - 14917.242: 98.2684% ( 2) 00:12:23.358 14917.242 - 14979.657: 98.2845% ( 2) 00:12:23.358 14979.657 - 15042.072: 98.3006% ( 2) 00:12:23.358 15042.072 - 15104.488: 98.3086% ( 1) 00:12:23.358 15104.488 - 15166.903: 98.3247% ( 2) 00:12:23.358 15166.903 - 15229.318: 98.3409% ( 2) 00:12:23.358 15229.318 - 15291.733: 98.3489% ( 1) 00:12:23.358 15291.733 - 15354.149: 98.3650% ( 2) 00:12:23.358 15354.149 - 15416.564: 98.3811% ( 2) 00:12:23.358 15416.564 - 15478.979: 98.3892% ( 1) 00:12:23.358 15478.979 - 15541.394: 98.4133% ( 3) 00:12:23.358 15541.394 - 15603.810: 98.4214% ( 1) 00:12:23.358 15603.810 - 15666.225: 98.4375% ( 2) 00:12:23.358 15666.225 - 15728.640: 98.4456% ( 1) 00:12:23.358 15728.640 - 15791.055: 98.4536% ( 1) 00:12:23.358 17101.775 - 17226.606: 98.4778% ( 3) 00:12:23.358 17226.606 - 17351.436: 98.5100% ( 4) 00:12:23.358 17351.436 - 17476.267: 98.5503% ( 5) 00:12:23.358 17476.267 - 17601.097: 98.5744% ( 3) 00:12:23.358 17601.097 - 17725.928: 98.5986% ( 3) 00:12:23.358 17725.928 - 17850.758: 98.6308% ( 4) 00:12:23.358 17850.758 - 17975.589: 98.6630% ( 4) 00:12:23.358 17975.589 - 18100.419: 98.6872% ( 3) 00:12:23.358 18100.419 - 18225.250: 98.7194% ( 4) 00:12:23.359 18225.250 - 18350.080: 98.7516% ( 4) 00:12:23.359 18350.080 - 18474.910: 98.7838% ( 4) 00:12:23.359 18474.910 - 18599.741: 98.8160% ( 4) 00:12:23.359 18599.741 - 18724.571: 98.8402% ( 3) 00:12:23.359 18724.571 - 18849.402: 98.8724% ( 4) 00:12:23.359 18849.402 - 18974.232: 98.9046% ( 4) 00:12:23.359 18974.232 - 19099.063: 98.9288% ( 3) 00:12:23.359 19099.063 - 19223.893: 98.9610% ( 4) 00:12:23.359 19223.893 - 19348.724: 98.9691% ( 1) 00:12:23.359 30708.297 - 30833.128: 98.9852% ( 2) 00:12:23.359 30833.128 - 30957.958: 99.0093% ( 3) 00:12:23.359 30957.958 - 31082.789: 99.0255% ( 2) 00:12:23.359 31082.789 - 31207.619: 99.0416% ( 2) 00:12:23.359 31207.619 - 31332.450: 99.0657% ( 3) 00:12:23.359 31332.450 - 31457.280: 99.0818% ( 2) 00:12:23.359 31457.280 - 31582.110: 99.0979% ( 2) 00:12:23.359 31582.110 - 31706.941: 99.1221% ( 3) 00:12:23.359 31706.941 - 31831.771: 99.1463% ( 3) 00:12:23.359 31831.771 - 31956.602: 99.1704% ( 3) 00:12:23.359 31956.602 - 32206.263: 99.2107% ( 5) 00:12:23.359 32206.263 - 32455.924: 99.2590% ( 6) 00:12:23.359 32455.924 - 32705.585: 99.2993% ( 5) 00:12:23.359 32705.585 - 32955.246: 99.3396% ( 5) 00:12:23.359 32955.246 - 33204.907: 99.3718% ( 4) 00:12:23.359 33204.907 - 33454.568: 99.4120% ( 5) 00:12:23.359 33454.568 - 33704.229: 99.4443% ( 4) 00:12:23.359 33704.229 - 33953.890: 99.4845% ( 5) 00:12:23.359 40195.413 - 40445.074: 99.5248% ( 5) 00:12:23.359 40445.074 - 40694.735: 99.5651% ( 5) 00:12:23.359 40694.735 - 40944.396: 99.5973% ( 4) 00:12:23.359 40944.396 - 41194.057: 99.6376% ( 5) 00:12:23.359 41194.057 - 41443.718: 99.6859% ( 6) 00:12:23.359 41443.718 - 41693.379: 99.7181% ( 4) 00:12:23.359 41693.379 - 41943.040: 99.7664% ( 6) 00:12:23.359 41943.040 - 42192.701: 99.8067% ( 5) 00:12:23.359 42192.701 - 42442.362: 99.8389% ( 4) 00:12:23.359 42442.362 - 42692.023: 99.8872% ( 6) 00:12:23.359 42692.023 - 42941.684: 99.9275% ( 5) 
00:12:23.359 42941.684 - 43191.345: 99.9758% ( 6) 00:12:23.359 43191.345 - 43441.006: 100.0000% ( 3) 00:12:23.359 00:12:23.359 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:12:23.359 ============================================================================== 00:12:23.359 Range in us Cumulative IO count 00:12:23.359 8238.811 - 8301.227: 0.1208% ( 15) 00:12:23.359 8301.227 - 8363.642: 0.2577% ( 17) 00:12:23.359 8363.642 - 8426.057: 0.6041% ( 43) 00:12:23.359 8426.057 - 8488.472: 1.0712% ( 58) 00:12:23.359 8488.472 - 8550.888: 1.7880% ( 89) 00:12:23.359 8550.888 - 8613.303: 2.6981% ( 113) 00:12:23.359 8613.303 - 8675.718: 3.8177% ( 139) 00:12:23.359 8675.718 - 8738.133: 5.1949% ( 171) 00:12:23.359 8738.133 - 8800.549: 6.7655% ( 195) 00:12:23.359 8800.549 - 8862.964: 8.7226% ( 243) 00:12:23.359 8862.964 - 8925.379: 10.9697% ( 279) 00:12:23.359 8925.379 - 8987.794: 13.3215% ( 292) 00:12:23.359 8987.794 - 9050.210: 15.8425% ( 313) 00:12:23.359 9050.210 - 9112.625: 18.5003% ( 330) 00:12:23.359 9112.625 - 9175.040: 21.1179% ( 325) 00:12:23.359 9175.040 - 9237.455: 23.8966% ( 345) 00:12:23.359 9237.455 - 9299.870: 26.8363% ( 365) 00:12:23.359 9299.870 - 9362.286: 29.8405% ( 373) 00:12:23.359 9362.286 - 9424.701: 32.9011% ( 380) 00:12:23.359 9424.701 - 9487.116: 36.2194% ( 412) 00:12:23.359 9487.116 - 9549.531: 39.3847% ( 393) 00:12:23.359 9549.531 - 9611.947: 42.7110% ( 413) 00:12:23.359 9611.947 - 9674.362: 45.9246% ( 399) 00:12:23.359 9674.362 - 9736.777: 49.1704% ( 403) 00:12:23.359 9736.777 - 9799.192: 52.3921% ( 400) 00:12:23.359 9799.192 - 9861.608: 55.3318% ( 365) 00:12:23.359 9861.608 - 9924.023: 58.1266% ( 347) 00:12:23.359 9924.023 - 9986.438: 60.8167% ( 334) 00:12:23.359 9986.438 - 10048.853: 63.2088% ( 297) 00:12:23.359 10048.853 - 10111.269: 65.3109% ( 261) 00:12:23.359 10111.269 - 10173.684: 67.4613% ( 267) 00:12:23.359 10173.684 - 10236.099: 69.4427% ( 246) 00:12:23.359 10236.099 - 10298.514: 71.2709% ( 227) 00:12:23.359 10298.514 - 10360.930: 73.0992% ( 227) 00:12:23.359 10360.930 - 10423.345: 74.8872% ( 222) 00:12:23.359 10423.345 - 10485.760: 76.4820% ( 198) 00:12:23.359 10485.760 - 10548.175: 77.9720% ( 185) 00:12:23.359 10548.175 - 10610.590: 79.3976% ( 177) 00:12:23.359 10610.590 - 10673.006: 80.7748% ( 171) 00:12:23.359 10673.006 - 10735.421: 81.9427% ( 145) 00:12:23.359 10735.421 - 10797.836: 83.1105% ( 145) 00:12:23.359 10797.836 - 10860.251: 84.1898% ( 134) 00:12:23.359 10860.251 - 10922.667: 85.2448% ( 131) 00:12:23.359 10922.667 - 10985.082: 86.1872% ( 117) 00:12:23.359 10985.082 - 11047.497: 86.9604% ( 96) 00:12:23.359 11047.497 - 11109.912: 87.6530% ( 86) 00:12:23.359 11109.912 - 11172.328: 88.2812% ( 78) 00:12:23.359 11172.328 - 11234.743: 88.8853% ( 75) 00:12:23.359 11234.743 - 11297.158: 89.4169% ( 66) 00:12:23.359 11297.158 - 11359.573: 89.9404% ( 65) 00:12:23.359 11359.573 - 11421.989: 90.4720% ( 66) 00:12:23.359 11421.989 - 11484.404: 91.0197% ( 68) 00:12:23.359 11484.404 - 11546.819: 91.4868% ( 58) 00:12:23.359 11546.819 - 11609.234: 91.8573% ( 46) 00:12:23.359 11609.234 - 11671.650: 92.2036% ( 43) 00:12:23.359 11671.650 - 11734.065: 92.5580% ( 44) 00:12:23.359 11734.065 - 11796.480: 92.9285% ( 46) 00:12:23.359 11796.480 - 11858.895: 93.3553% ( 53) 00:12:23.359 11858.895 - 11921.310: 93.6936% ( 42) 00:12:23.359 11921.310 - 11983.726: 94.0077% ( 39) 00:12:23.359 11983.726 - 12046.141: 94.3621% ( 44) 00:12:23.359 12046.141 - 12108.556: 94.6923% ( 41) 00:12:23.359 12108.556 - 12170.971: 94.9581% ( 33) 00:12:23.359 12170.971 - 12233.387: 
95.1675% ( 26) 00:12:23.359 12233.387 - 12295.802: 95.4011% ( 29) 00:12:23.359 12295.802 - 12358.217: 95.6186% ( 27) 00:12:23.359 12358.217 - 12420.632: 95.8199% ( 25) 00:12:23.359 12420.632 - 12483.048: 96.0213% ( 25) 00:12:23.359 12483.048 - 12545.463: 96.1823% ( 20) 00:12:23.359 12545.463 - 12607.878: 96.3193% ( 17) 00:12:23.359 12607.878 - 12670.293: 96.4723% ( 19) 00:12:23.359 12670.293 - 12732.709: 96.6334% ( 20) 00:12:23.359 12732.709 - 12795.124: 96.7864% ( 19) 00:12:23.359 12795.124 - 12857.539: 96.9314% ( 18) 00:12:23.359 12857.539 - 12919.954: 97.1005% ( 21) 00:12:23.359 12919.954 - 12982.370: 97.2374% ( 17) 00:12:23.359 12982.370 - 13044.785: 97.3824% ( 18) 00:12:23.359 13044.785 - 13107.200: 97.4710% ( 11) 00:12:23.359 13107.200 - 13169.615: 97.5435% ( 9) 00:12:23.359 13169.615 - 13232.030: 97.5838% ( 5) 00:12:23.359 13232.030 - 13294.446: 97.6079% ( 3) 00:12:23.359 13294.446 - 13356.861: 97.6321% ( 3) 00:12:23.359 13356.861 - 13419.276: 97.6643% ( 4) 00:12:23.360 13419.276 - 13481.691: 97.6885% ( 3) 00:12:23.360 13481.691 - 13544.107: 97.7207% ( 4) 00:12:23.360 13544.107 - 13606.522: 97.7448% ( 3) 00:12:23.360 13606.522 - 13668.937: 97.7690% ( 3) 00:12:23.360 13668.937 - 13731.352: 97.7932% ( 3) 00:12:23.360 13731.352 - 13793.768: 97.8173% ( 3) 00:12:23.360 13793.768 - 13856.183: 97.8495% ( 4) 00:12:23.360 13856.183 - 13918.598: 97.8737% ( 3) 00:12:23.360 13918.598 - 13981.013: 97.8979% ( 3) 00:12:23.360 13981.013 - 14043.429: 97.9301% ( 4) 00:12:23.360 14043.429 - 14105.844: 97.9381% ( 1) 00:12:23.360 14230.674 - 14293.090: 97.9462% ( 1) 00:12:23.360 14293.090 - 14355.505: 97.9623% ( 2) 00:12:23.360 14355.505 - 14417.920: 97.9704% ( 1) 00:12:23.360 14417.920 - 14480.335: 97.9865% ( 2) 00:12:23.360 14480.335 - 14542.750: 98.0026% ( 2) 00:12:23.360 14542.750 - 14605.166: 98.0106% ( 1) 00:12:23.360 14605.166 - 14667.581: 98.0348% ( 3) 00:12:23.360 14667.581 - 14729.996: 98.0428% ( 1) 00:12:23.360 14729.996 - 14792.411: 98.0509% ( 1) 00:12:23.360 14792.411 - 14854.827: 98.0670% ( 2) 00:12:23.360 14854.827 - 14917.242: 98.0831% ( 2) 00:12:23.360 14917.242 - 14979.657: 98.0992% ( 2) 00:12:23.360 14979.657 - 15042.072: 98.1153% ( 2) 00:12:23.360 15042.072 - 15104.488: 98.1314% ( 2) 00:12:23.360 15104.488 - 15166.903: 98.1476% ( 2) 00:12:23.360 15166.903 - 15229.318: 98.1637% ( 2) 00:12:23.360 15229.318 - 15291.733: 98.1798% ( 2) 00:12:23.360 15291.733 - 15354.149: 98.1878% ( 1) 00:12:23.360 15354.149 - 15416.564: 98.2039% ( 2) 00:12:23.360 15416.564 - 15478.979: 98.2120% ( 1) 00:12:23.360 15478.979 - 15541.394: 98.2281% ( 2) 00:12:23.360 15541.394 - 15603.810: 98.2442% ( 2) 00:12:23.360 15603.810 - 15666.225: 98.2603% ( 2) 00:12:23.360 15666.225 - 15728.640: 98.2684% ( 1) 00:12:23.360 15728.640 - 15791.055: 98.2845% ( 2) 00:12:23.360 15791.055 - 15853.470: 98.3006% ( 2) 00:12:23.360 15853.470 - 15915.886: 98.3167% ( 2) 00:12:23.360 15915.886 - 15978.301: 98.3328% ( 2) 00:12:23.360 15978.301 - 16103.131: 98.3650% ( 4) 00:12:23.360 16103.131 - 16227.962: 98.3892% ( 3) 00:12:23.360 16227.962 - 16352.792: 98.4214% ( 4) 00:12:23.360 16352.792 - 16477.623: 98.4456% ( 3) 00:12:23.360 16477.623 - 16602.453: 98.4536% ( 1) 00:12:23.360 17725.928 - 17850.758: 98.4697% ( 2) 00:12:23.360 17850.758 - 17975.589: 98.5019% ( 4) 00:12:23.360 17975.589 - 18100.419: 98.5341% ( 4) 00:12:23.360 18100.419 - 18225.250: 98.5664% ( 4) 00:12:23.360 18225.250 - 18350.080: 98.5986% ( 4) 00:12:23.360 18350.080 - 18474.910: 98.6227% ( 3) 00:12:23.360 18474.910 - 18599.741: 98.6550% ( 4) 00:12:23.360 18599.741 - 
18724.571: 98.6791% ( 3) 00:12:23.360 18724.571 - 18849.402: 98.7113% ( 4) 00:12:23.360 18849.402 - 18974.232: 98.7436% ( 4) 00:12:23.360 18974.232 - 19099.063: 98.7758% ( 4) 00:12:23.360 19099.063 - 19223.893: 98.8080% ( 4) 00:12:23.360 19223.893 - 19348.724: 98.8322% ( 3) 00:12:23.360 19348.724 - 19473.554: 98.8644% ( 4) 00:12:23.360 19473.554 - 19598.385: 98.9046% ( 5) 00:12:23.360 19598.385 - 19723.215: 98.9288% ( 3) 00:12:23.360 19723.215 - 19848.046: 98.9691% ( 5) 00:12:23.360 27587.535 - 27712.366: 98.9932% ( 3) 00:12:23.360 27712.366 - 27837.196: 99.0093% ( 2) 00:12:23.360 27837.196 - 27962.027: 99.0255% ( 2) 00:12:23.360 27962.027 - 28086.857: 99.0496% ( 3) 00:12:23.360 28086.857 - 28211.688: 99.0657% ( 2) 00:12:23.360 28211.688 - 28336.518: 99.0818% ( 2) 00:12:23.360 28336.518 - 28461.349: 99.1060% ( 3) 00:12:23.360 28461.349 - 28586.179: 99.1221% ( 2) 00:12:23.360 28586.179 - 28711.010: 99.1463% ( 3) 00:12:23.360 28711.010 - 28835.840: 99.1704% ( 3) 00:12:23.360 28835.840 - 28960.670: 99.1865% ( 2) 00:12:23.360 28960.670 - 29085.501: 99.2107% ( 3) 00:12:23.360 29085.501 - 29210.331: 99.2268% ( 2) 00:12:23.360 29210.331 - 29335.162: 99.2510% ( 3) 00:12:23.360 29335.162 - 29459.992: 99.2671% ( 2) 00:12:23.360 29459.992 - 29584.823: 99.2912% ( 3) 00:12:23.360 29584.823 - 29709.653: 99.3073% ( 2) 00:12:23.360 29709.653 - 29834.484: 99.3235% ( 2) 00:12:23.360 29834.484 - 29959.314: 99.3396% ( 2) 00:12:23.360 29959.314 - 30084.145: 99.3637% ( 3) 00:12:23.360 30084.145 - 30208.975: 99.3879% ( 3) 00:12:23.360 30208.975 - 30333.806: 99.4040% ( 2) 00:12:23.360 30333.806 - 30458.636: 99.4201% ( 2) 00:12:23.360 30458.636 - 30583.467: 99.4443% ( 3) 00:12:23.360 30583.467 - 30708.297: 99.4604% ( 2) 00:12:23.360 30708.297 - 30833.128: 99.4845% ( 3) 00:12:23.360 36949.821 - 37199.482: 99.5087% ( 3) 00:12:23.360 37199.482 - 37449.143: 99.5490% ( 5) 00:12:23.360 37449.143 - 37698.804: 99.5973% ( 6) 00:12:23.360 37698.804 - 37948.465: 99.6537% ( 7) 00:12:23.360 37948.465 - 38198.126: 99.6939% ( 5) 00:12:23.360 38198.126 - 38447.787: 99.7423% ( 6) 00:12:23.360 38447.787 - 38697.448: 99.7986% ( 7) 00:12:23.360 38697.448 - 38947.109: 99.8389% ( 5) 00:12:23.360 38947.109 - 39196.770: 99.8872% ( 6) 00:12:23.360 39196.770 - 39446.430: 99.9356% ( 6) 00:12:23.360 39446.430 - 39696.091: 99.9839% ( 6) 00:12:23.360 39696.091 - 39945.752: 100.0000% ( 2) 00:12:23.360 00:12:23.360 11:26:28 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:12:24.737 Initializing NVMe Controllers 00:12:24.737 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:24.737 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:24.737 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:24.737 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:24.737 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:12:24.737 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:12:24.737 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:12:24.737 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:12:24.737 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:12:24.737 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:12:24.737 Initialization complete. Launching workers. 
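The spdk_nvme_perf invocation above drives a write workload (-w write) at queue depth 128 (-q 128) with 12288-byte (12 KiB) I/Os for one second (-t 1) against all attached namespaces; -i 0 selects the shared-memory instance, and the doubled -L flag enables software latency tracking, which in this run also emits the full per-bucket cumulative histograms that follow. As a minimal reader-side sketch for interrogating those histograms (this is not part of the SPDK test suite; the record layout is assumed from this log's own output, and parse_histogram/percentile are hypothetical helper names):

import re

# Assumed record layout, copied from the histogram lines in this log:
#   "<low_us> - <high_us>: <cumulative_pct>% ( <count>)"
RECORD = re.compile(
    r"(?P<low>\d+\.\d+)\s*-\s*(?P<high>\d+\.\d+):\s*"
    r"(?P<cum>\d+\.\d+)%\s*\(\s*(?P<count>\d+)\)"
)

def parse_histogram(text):
    # (bucket upper bound in us, cumulative percent) pairs, in log order.
    return [(float(m["high"]), float(m["cum"])) for m in RECORD.finditer(text)]

def percentile(buckets, pct):
    # Upper bound of the first bucket whose cumulative percent reaches pct.
    for high_us, cum in buckets:
        if cum >= pct:
            return high_us
    return None

# Records copied verbatim from the PCIE (0000:00:10.0) NSID 1 histogram below:
sample = ("9237.455 - 9299.870: 0.0098% ( 1) "
          "13668.937 - 13731.352: 89.4851% ( 121) "
          "13731.352 - 13793.768: 90.5071% ( 104)")
print(percentile(parse_histogram(sample), 90.0))  # 13793.768

The result agrees with the tool's own "90.00000% : 13793.768us" summary entry for that device, so the sketch only re-derives what spdk_nvme_perf already reports from its bucket counts.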
00:12:24.737 ======================================================== 00:12:24.737 Latency(us) 00:12:24.737 Device Information : IOPS MiB/s Average min max 00:12:24.737 PCIE (0000:00:10.0) NSID 1 from core 0: 10170.32 119.18 12623.22 9242.56 53657.88 00:12:24.737 PCIE (0000:00:11.0) NSID 1 from core 0: 10170.32 119.18 12590.56 9213.91 50180.82 00:12:24.737 PCIE (0000:00:13.0) NSID 1 from core 0: 10170.32 119.18 12559.13 9391.70 47765.56 00:12:24.737 PCIE (0000:00:12.0) NSID 1 from core 0: 10170.32 119.18 12528.45 9327.54 44685.45 00:12:24.737 PCIE (0000:00:12.0) NSID 2 from core 0: 10170.32 119.18 12497.60 9283.37 41906.78 00:12:24.737 PCIE (0000:00:12.0) NSID 3 from core 0: 10234.29 119.93 12385.56 9234.77 32170.44 00:12:24.737 ======================================================== 00:12:24.737 Total : 61085.91 715.85 12530.60 9213.91 53657.88 00:12:24.737 00:12:24.737 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:12:24.738 ================================================================================= 00:12:24.738 1.00000% : 9611.947us 00:12:24.738 10.00000% : 10548.175us 00:12:24.738 25.00000% : 11297.158us 00:12:24.738 50.00000% : 12295.802us 00:12:24.738 75.00000% : 13107.200us 00:12:24.738 90.00000% : 13793.768us 00:12:24.738 95.00000% : 14230.674us 00:12:24.738 98.00000% : 14854.827us 00:12:24.738 99.00000% : 42692.023us 00:12:24.738 99.50000% : 51430.156us 00:12:24.738 99.90000% : 53177.783us 00:12:24.738 99.99000% : 53677.105us 00:12:24.738 99.99900% : 53677.105us 00:12:24.738 99.99990% : 53677.105us 00:12:24.738 99.99999% : 53677.105us 00:12:24.738 00:12:24.738 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:12:24.738 ================================================================================= 00:12:24.738 1.00000% : 9799.192us 00:12:24.738 10.00000% : 10673.006us 00:12:24.738 25.00000% : 11421.989us 00:12:24.738 50.00000% : 12295.802us 00:12:24.738 75.00000% : 13107.200us 00:12:24.738 90.00000% : 13731.352us 00:12:24.738 95.00000% : 14105.844us 00:12:24.738 98.00000% : 14792.411us 00:12:24.738 99.00000% : 39696.091us 00:12:24.738 99.50000% : 48184.564us 00:12:24.738 99.90000% : 49932.190us 00:12:24.738 99.99000% : 50181.851us 00:12:24.738 99.99900% : 50181.851us 00:12:24.738 99.99990% : 50181.851us 00:12:24.738 99.99999% : 50181.851us 00:12:24.738 00:12:24.738 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:12:24.738 ================================================================================= 00:12:24.738 1.00000% : 9861.608us 00:12:24.738 10.00000% : 10610.590us 00:12:24.738 25.00000% : 11359.573us 00:12:24.738 50.00000% : 12233.387us 00:12:24.738 75.00000% : 13107.200us 00:12:24.738 90.00000% : 13731.352us 00:12:24.738 95.00000% : 14168.259us 00:12:24.738 98.00000% : 15042.072us 00:12:24.738 99.00000% : 37449.143us 00:12:24.738 99.50000% : 45687.954us 00:12:24.738 99.90000% : 47435.581us 00:12:24.738 99.99000% : 47934.903us 00:12:24.738 99.99900% : 47934.903us 00:12:24.738 99.99990% : 47934.903us 00:12:24.738 99.99999% : 47934.903us 00:12:24.738 00:12:24.738 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:12:24.738 ================================================================================= 00:12:24.738 1.00000% : 9861.608us 00:12:24.738 10.00000% : 10610.590us 00:12:24.738 25.00000% : 11421.989us 00:12:24.738 50.00000% : 12233.387us 00:12:24.738 75.00000% : 13107.200us 00:12:24.738 90.00000% : 13731.352us 00:12:24.738 95.00000% : 14168.259us 00:12:24.738 98.00000% : 
15104.488us 00:12:24.738 99.00000% : 34453.211us 00:12:24.738 99.50000% : 42692.023us 00:12:24.738 99.90000% : 44439.650us 00:12:24.738 99.99000% : 44689.310us 00:12:24.738 99.99900% : 44689.310us 00:12:24.738 99.99990% : 44689.310us 00:12:24.738 99.99999% : 44689.310us 00:12:24.738 00:12:24.738 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:12:24.738 ================================================================================= 00:12:24.738 1.00000% : 9799.192us 00:12:24.738 10.00000% : 10673.006us 00:12:24.738 25.00000% : 11359.573us 00:12:24.738 50.00000% : 12295.802us 00:12:24.738 75.00000% : 13107.200us 00:12:24.738 90.00000% : 13731.352us 00:12:24.738 95.00000% : 14168.259us 00:12:24.738 98.00000% : 15166.903us 00:12:24.738 99.00000% : 31332.450us 00:12:24.738 99.50000% : 39945.752us 00:12:24.738 99.90000% : 41693.379us 00:12:24.738 99.99000% : 41943.040us 00:12:24.738 99.99900% : 41943.040us 00:12:24.738 99.99990% : 41943.040us 00:12:24.738 99.99999% : 41943.040us 00:12:24.738 00:12:24.738 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:12:24.738 ================================================================================= 00:12:24.738 1.00000% : 9736.777us 00:12:24.738 10.00000% : 10673.006us 00:12:24.738 25.00000% : 11421.989us 00:12:24.738 50.00000% : 12295.802us 00:12:24.738 75.00000% : 13107.200us 00:12:24.738 90.00000% : 13731.352us 00:12:24.738 95.00000% : 14168.259us 00:12:24.738 98.00000% : 15042.072us 00:12:24.738 99.00000% : 21720.503us 00:12:24.738 99.50000% : 29959.314us 00:12:24.738 99.90000% : 31831.771us 00:12:24.738 99.99000% : 32206.263us 00:12:24.738 99.99900% : 32206.263us 00:12:24.738 99.99990% : 32206.263us 00:12:24.738 99.99999% : 32206.263us 00:12:24.738 00:12:24.738 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:12:24.738 ============================================================================== 00:12:24.738 Range in us Cumulative IO count 00:12:24.738 9237.455 - 9299.870: 0.0098% ( 1) 00:12:24.738 9299.870 - 9362.286: 0.0884% ( 8) 00:12:24.738 9362.286 - 9424.701: 0.3243% ( 24) 00:12:24.738 9424.701 - 9487.116: 0.4914% ( 17) 00:12:24.738 9487.116 - 9549.531: 0.7370% ( 25) 00:12:24.738 9549.531 - 9611.947: 1.0220% ( 29) 00:12:24.738 9611.947 - 9674.362: 1.3954% ( 38) 00:12:24.738 9674.362 - 9736.777: 1.6706% ( 28) 00:12:24.738 9736.777 - 9799.192: 1.9752% ( 31) 00:12:24.738 9799.192 - 9861.608: 2.1914% ( 22) 00:12:24.738 9861.608 - 9924.023: 2.5059% ( 32) 00:12:24.738 9924.023 - 9986.438: 3.1152% ( 62) 00:12:24.738 9986.438 - 10048.853: 3.4591% ( 35) 00:12:24.738 10048.853 - 10111.269: 4.1765% ( 73) 00:12:24.738 10111.269 - 10173.684: 4.9037% ( 74) 00:12:24.738 10173.684 - 10236.099: 5.6407% ( 75) 00:12:24.738 10236.099 - 10298.514: 6.2598% ( 63) 00:12:24.738 10298.514 - 10360.930: 7.1541% ( 91) 00:12:24.738 10360.930 - 10423.345: 7.9009% ( 76) 00:12:24.738 10423.345 - 10485.760: 9.2964% ( 142) 00:12:24.738 10485.760 - 10548.175: 10.1808% ( 90) 00:12:24.738 10548.175 - 10610.590: 10.9866% ( 82) 00:12:24.738 10610.590 - 10673.006: 11.7728% ( 80) 00:12:24.738 10673.006 - 10735.421: 12.7653% ( 101) 00:12:24.738 10735.421 - 10797.836: 14.0232% ( 128) 00:12:24.738 10797.836 - 10860.251: 15.1042% ( 110) 00:12:24.738 10860.251 - 10922.667: 16.4800% ( 140) 00:12:24.738 10922.667 - 10985.082: 17.6789% ( 122) 00:12:24.738 10985.082 - 11047.497: 18.7598% ( 110) 00:12:24.738 11047.497 - 11109.912: 20.5189% ( 179) 00:12:24.738 11109.912 - 11172.328: 22.0224% ( 153) 00:12:24.738 11172.328 - 11234.743: 
23.6439% ( 165) 00:12:24.738 11234.743 - 11297.158: 25.1376% ( 152) 00:12:24.738 11297.158 - 11359.573: 26.4446% ( 133) 00:12:24.738 11359.573 - 11421.989: 27.9579% ( 154) 00:12:24.738 11421.989 - 11484.404: 29.7858% ( 186) 00:12:24.738 11484.404 - 11546.819: 31.4957% ( 174) 00:12:24.738 11546.819 - 11609.234: 32.8616% ( 139) 00:12:24.738 11609.234 - 11671.650: 34.3062% ( 147) 00:12:24.738 11671.650 - 11734.065: 35.7410% ( 146) 00:12:24.738 11734.065 - 11796.480: 37.1364% ( 142) 00:12:24.738 11796.480 - 11858.895: 38.8758% ( 177) 00:12:24.738 11858.895 - 11921.310: 40.4088% ( 156) 00:12:24.738 11921.310 - 11983.726: 41.8927% ( 151) 00:12:24.738 11983.726 - 12046.141: 43.6419% ( 178) 00:12:24.738 12046.141 - 12108.556: 45.3420% ( 173) 00:12:24.738 12108.556 - 12170.971: 47.2681% ( 196) 00:12:24.738 12170.971 - 12233.387: 49.2335% ( 200) 00:12:24.738 12233.387 - 12295.802: 51.1006% ( 190) 00:12:24.738 12295.802 - 12358.217: 52.9285% ( 186) 00:12:24.738 12358.217 - 12420.632: 54.8153% ( 192) 00:12:24.738 12420.632 - 12483.048: 56.7512% ( 197) 00:12:24.738 12483.048 - 12545.463: 58.8935% ( 218) 00:12:24.738 12545.463 - 12607.878: 60.8392% ( 198) 00:12:24.738 12607.878 - 12670.293: 62.9324% ( 213) 00:12:24.738 12670.293 - 12732.709: 64.7799% ( 188) 00:12:24.738 12732.709 - 12795.124: 66.5487% ( 180) 00:12:24.738 12795.124 - 12857.539: 68.2488% ( 173) 00:12:24.738 12857.539 - 12919.954: 69.8605% ( 164) 00:12:24.738 12919.954 - 12982.370: 71.5998% ( 177) 00:12:24.738 12982.370 - 13044.785: 73.4670% ( 190) 00:12:24.738 13044.785 - 13107.200: 75.3636% ( 193) 00:12:24.738 13107.200 - 13169.615: 77.1423% ( 181) 00:12:24.738 13169.615 - 13232.030: 78.6753% ( 156) 00:12:24.738 13232.030 - 13294.446: 80.3557% ( 171) 00:12:24.738 13294.446 - 13356.861: 82.0067% ( 168) 00:12:24.738 13356.861 - 13419.276: 83.7068% ( 173) 00:12:24.738 13419.276 - 13481.691: 85.0727% ( 139) 00:12:24.738 13481.691 - 13544.107: 86.2127% ( 116) 00:12:24.738 13544.107 - 13606.522: 87.1757% ( 98) 00:12:24.738 13606.522 - 13668.937: 88.2960% ( 114) 00:12:24.738 13668.937 - 13731.352: 89.4851% ( 121) 00:12:24.738 13731.352 - 13793.768: 90.5071% ( 104) 00:12:24.738 13793.768 - 13856.183: 91.3719% ( 88) 00:12:24.738 13856.183 - 13918.598: 92.1875% ( 83) 00:12:24.738 13918.598 - 13981.013: 92.8656% ( 69) 00:12:24.738 13981.013 - 14043.429: 93.5043% ( 65) 00:12:24.738 14043.429 - 14105.844: 94.1038% ( 61) 00:12:24.738 14105.844 - 14168.259: 94.6737% ( 58) 00:12:24.738 14168.259 - 14230.674: 95.1553% ( 49) 00:12:24.738 14230.674 - 14293.090: 95.6859% ( 54) 00:12:24.738 14293.090 - 14355.505: 96.0987% ( 42) 00:12:24.738 14355.505 - 14417.920: 96.3836% ( 29) 00:12:24.738 14417.920 - 14480.335: 96.8259% ( 45) 00:12:24.738 14480.335 - 14542.750: 97.1993% ( 38) 00:12:24.738 14542.750 - 14605.166: 97.3958% ( 20) 00:12:24.738 14605.166 - 14667.581: 97.6317% ( 24) 00:12:24.738 14667.581 - 14729.996: 97.7889% ( 16) 00:12:24.738 14729.996 - 14792.411: 97.9658% ( 18) 00:12:24.738 14792.411 - 14854.827: 98.0444% ( 8) 00:12:24.738 14854.827 - 14917.242: 98.1623% ( 12) 00:12:24.738 14917.242 - 14979.657: 98.2704% ( 11) 00:12:24.738 14979.657 - 15042.072: 98.3196% ( 5) 00:12:24.738 15042.072 - 15104.488: 98.3687% ( 5) 00:12:24.739 15104.488 - 15166.903: 98.3982% ( 3) 00:12:24.739 15166.903 - 15229.318: 98.4473% ( 5) 00:12:24.739 15229.318 - 15291.733: 98.4965% ( 5) 00:12:24.739 15291.733 - 15354.149: 98.5456% ( 5) 00:12:24.739 15354.149 - 15416.564: 98.5751% ( 3) 00:12:24.739 15416.564 - 15478.979: 98.6144% ( 4) 00:12:24.739 15478.979 - 15541.394: 
98.6537% ( 4) 00:12:24.739 15541.394 - 15603.810: 98.6832% ( 3) 00:12:24.739 15603.810 - 15666.225: 98.7028% ( 2) 00:12:24.739 15666.225 - 15728.640: 98.7225% ( 2) 00:12:24.739 15728.640 - 15791.055: 98.7421% ( 2) 00:12:24.739 41443.718 - 41693.379: 98.7913% ( 5) 00:12:24.739 41693.379 - 41943.040: 98.8404% ( 5) 00:12:24.739 41943.040 - 42192.701: 98.8895% ( 5) 00:12:24.739 42192.701 - 42442.362: 98.9485% ( 6) 00:12:24.739 42442.362 - 42692.023: 99.0075% ( 6) 00:12:24.739 42692.023 - 42941.684: 99.0566% ( 5) 00:12:24.739 42941.684 - 43191.345: 99.1057% ( 5) 00:12:24.739 43191.345 - 43441.006: 99.1647% ( 6) 00:12:24.739 43441.006 - 43690.667: 99.2237% ( 6) 00:12:24.739 43690.667 - 43940.328: 99.2826% ( 6) 00:12:24.739 43940.328 - 44189.989: 99.3318% ( 5) 00:12:24.739 44189.989 - 44439.650: 99.3711% ( 4) 00:12:24.739 50431.512 - 50681.173: 99.3809% ( 1) 00:12:24.739 50681.173 - 50930.834: 99.4300% ( 5) 00:12:24.739 50930.834 - 51180.495: 99.4890% ( 6) 00:12:24.739 51180.495 - 51430.156: 99.5283% ( 4) 00:12:24.739 51430.156 - 51679.817: 99.5774% ( 5) 00:12:24.739 51679.817 - 51929.478: 99.6364% ( 6) 00:12:24.739 51929.478 - 52179.139: 99.6855% ( 5) 00:12:24.739 52179.139 - 52428.800: 99.7445% ( 6) 00:12:24.739 52428.800 - 52678.461: 99.7936% ( 5) 00:12:24.739 52678.461 - 52928.122: 99.8526% ( 6) 00:12:24.739 52928.122 - 53177.783: 99.9017% ( 5) 00:12:24.739 53177.783 - 53427.444: 99.9509% ( 5) 00:12:24.739 53427.444 - 53677.105: 100.0000% ( 5) 00:12:24.739 00:12:24.739 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:12:24.739 ============================================================================== 00:12:24.739 Range in us Cumulative IO count 00:12:24.739 9175.040 - 9237.455: 0.0098% ( 1) 00:12:24.739 9362.286 - 9424.701: 0.0197% ( 1) 00:12:24.739 9424.701 - 9487.116: 0.0590% ( 4) 00:12:24.739 9487.116 - 9549.531: 0.1769% ( 12) 00:12:24.739 9549.531 - 9611.947: 0.2850% ( 11) 00:12:24.739 9611.947 - 9674.362: 0.4226% ( 14) 00:12:24.739 9674.362 - 9736.777: 0.6486% ( 23) 00:12:24.739 9736.777 - 9799.192: 1.0122% ( 37) 00:12:24.739 9799.192 - 9861.608: 1.4937% ( 49) 00:12:24.739 9861.608 - 9924.023: 2.0342% ( 55) 00:12:24.739 9924.023 - 9986.438: 2.4469% ( 42) 00:12:24.739 9986.438 - 10048.853: 2.9776% ( 54) 00:12:24.739 10048.853 - 10111.269: 3.3314% ( 36) 00:12:24.739 10111.269 - 10173.684: 3.7834% ( 46) 00:12:24.739 10173.684 - 10236.099: 4.3239% ( 55) 00:12:24.739 10236.099 - 10298.514: 5.2771% ( 97) 00:12:24.739 10298.514 - 10360.930: 6.0436% ( 78) 00:12:24.739 10360.930 - 10423.345: 6.8396% ( 81) 00:12:24.739 10423.345 - 10485.760: 7.6946% ( 87) 00:12:24.739 10485.760 - 10548.175: 8.7461% ( 107) 00:12:24.739 10548.175 - 10610.590: 9.9253% ( 120) 00:12:24.739 10610.590 - 10673.006: 10.9277% ( 102) 00:12:24.739 10673.006 - 10735.421: 12.0086% ( 110) 00:12:24.739 10735.421 - 10797.836: 13.2370% ( 125) 00:12:24.739 10797.836 - 10860.251: 14.3180% ( 110) 00:12:24.739 10860.251 - 10922.667: 15.3105% ( 101) 00:12:24.739 10922.667 - 10985.082: 16.6077% ( 132) 00:12:24.739 10985.082 - 11047.497: 17.8852% ( 130) 00:12:24.739 11047.497 - 11109.912: 18.9760% ( 111) 00:12:24.739 11109.912 - 11172.328: 20.2142% ( 126) 00:12:24.739 11172.328 - 11234.743: 21.5998% ( 141) 00:12:24.739 11234.743 - 11297.158: 23.0444% ( 147) 00:12:24.739 11297.158 - 11359.573: 24.7347% ( 172) 00:12:24.739 11359.573 - 11421.989: 26.3660% ( 166) 00:12:24.739 11421.989 - 11484.404: 28.1840% ( 185) 00:12:24.739 11484.404 - 11546.819: 30.1002% ( 195) 00:12:24.739 11546.819 - 11609.234: 31.9379% ( 187) 
00:12:24.739 11609.234 - 11671.650: 33.6183% ( 171) 00:12:24.739 11671.650 - 11734.065: 35.2496% ( 166) 00:12:24.739 11734.065 - 11796.480: 36.9006% ( 168) 00:12:24.739 11796.480 - 11858.895: 38.7382% ( 187) 00:12:24.739 11858.895 - 11921.310: 40.6152% ( 191) 00:12:24.739 11921.310 - 11983.726: 42.1875% ( 160) 00:12:24.739 11983.726 - 12046.141: 43.9367% ( 178) 00:12:24.739 12046.141 - 12108.556: 45.5582% ( 165) 00:12:24.739 12108.556 - 12170.971: 47.3369% ( 181) 00:12:24.739 12170.971 - 12233.387: 49.1745% ( 187) 00:12:24.739 12233.387 - 12295.802: 50.8058% ( 166) 00:12:24.739 12295.802 - 12358.217: 52.5747% ( 180) 00:12:24.739 12358.217 - 12420.632: 54.4025% ( 186) 00:12:24.739 12420.632 - 12483.048: 56.0535% ( 168) 00:12:24.739 12483.048 - 12545.463: 58.1761% ( 216) 00:12:24.739 12545.463 - 12607.878: 60.2594% ( 212) 00:12:24.739 12607.878 - 12670.293: 62.1954% ( 197) 00:12:24.739 12670.293 - 12732.709: 64.2001% ( 204) 00:12:24.739 12732.709 - 12795.124: 66.2146% ( 205) 00:12:24.739 12795.124 - 12857.539: 68.1604% ( 198) 00:12:24.739 12857.539 - 12919.954: 70.1847% ( 206) 00:12:24.739 12919.954 - 12982.370: 72.3172% ( 217) 00:12:24.739 12982.370 - 13044.785: 74.5480% ( 227) 00:12:24.739 13044.785 - 13107.200: 76.4741% ( 196) 00:12:24.739 13107.200 - 13169.615: 78.4886% ( 205) 00:12:24.739 13169.615 - 13232.030: 80.4638% ( 201) 00:12:24.739 13232.030 - 13294.446: 82.5373% ( 211) 00:12:24.739 13294.446 - 13356.861: 84.2276% ( 172) 00:12:24.739 13356.861 - 13419.276: 85.7508% ( 155) 00:12:24.739 13419.276 - 13481.691: 86.8121% ( 108) 00:12:24.739 13481.691 - 13544.107: 87.9029% ( 111) 00:12:24.739 13544.107 - 13606.522: 88.7972% ( 91) 00:12:24.739 13606.522 - 13668.937: 89.6030% ( 82) 00:12:24.739 13668.937 - 13731.352: 90.5071% ( 92) 00:12:24.739 13731.352 - 13793.768: 91.3719% ( 88) 00:12:24.739 13793.768 - 13856.183: 92.2072% ( 85) 00:12:24.739 13856.183 - 13918.598: 93.0523% ( 86) 00:12:24.739 13918.598 - 13981.013: 93.8876% ( 85) 00:12:24.739 13981.013 - 14043.429: 94.8113% ( 94) 00:12:24.739 14043.429 - 14105.844: 95.4501% ( 65) 00:12:24.739 14105.844 - 14168.259: 95.9906% ( 55) 00:12:24.739 14168.259 - 14230.674: 96.5114% ( 53) 00:12:24.739 14230.674 - 14293.090: 96.7669% ( 26) 00:12:24.739 14293.090 - 14355.505: 97.0028% ( 24) 00:12:24.739 14355.505 - 14417.920: 97.2484% ( 25) 00:12:24.739 14417.920 - 14480.335: 97.4351% ( 19) 00:12:24.739 14480.335 - 14542.750: 97.5924% ( 16) 00:12:24.739 14542.750 - 14605.166: 97.7398% ( 15) 00:12:24.739 14605.166 - 14667.581: 97.8381% ( 10) 00:12:24.739 14667.581 - 14729.996: 97.9461% ( 11) 00:12:24.739 14729.996 - 14792.411: 98.0346% ( 9) 00:12:24.739 14792.411 - 14854.827: 98.1132% ( 8) 00:12:24.739 14854.827 - 14917.242: 98.1722% ( 6) 00:12:24.739 14917.242 - 14979.657: 98.2311% ( 6) 00:12:24.739 14979.657 - 15042.072: 98.2508% ( 2) 00:12:24.739 15042.072 - 15104.488: 98.2803% ( 3) 00:12:24.739 15104.488 - 15166.903: 98.2999% ( 2) 00:12:24.739 15166.903 - 15229.318: 98.3294% ( 3) 00:12:24.739 15229.318 - 15291.733: 98.3392% ( 1) 00:12:24.739 15291.733 - 15354.149: 98.3589% ( 2) 00:12:24.739 15354.149 - 15416.564: 98.3785% ( 2) 00:12:24.739 15416.564 - 15478.979: 98.4080% ( 3) 00:12:24.739 15478.979 - 15541.394: 98.4277% ( 2) 00:12:24.739 15541.394 - 15603.810: 98.4375% ( 1) 00:12:24.739 15603.810 - 15666.225: 98.4670% ( 3) 00:12:24.739 15666.225 - 15728.640: 98.5063% ( 4) 00:12:24.739 15728.640 - 15791.055: 98.5849% ( 8) 00:12:24.739 15791.055 - 15853.470: 98.6439% ( 6) 00:12:24.739 15853.470 - 15915.886: 98.6635% ( 2) 00:12:24.739 15915.886 
- 15978.301: 98.6930% ( 3) 00:12:24.739 15978.301 - 16103.131: 98.7127% ( 2) 00:12:24.739 16103.131 - 16227.962: 98.7421% ( 3) 00:12:24.739 38447.787 - 38697.448: 98.7716% ( 3) 00:12:24.739 38697.448 - 38947.109: 98.8404% ( 7) 00:12:24.739 38947.109 - 39196.770: 98.8994% ( 6) 00:12:24.739 39196.770 - 39446.430: 98.9583% ( 6) 00:12:24.739 39446.430 - 39696.091: 99.0271% ( 7) 00:12:24.739 39696.091 - 39945.752: 99.0861% ( 6) 00:12:24.739 39945.752 - 40195.413: 99.1450% ( 6) 00:12:24.739 40195.413 - 40445.074: 99.2040% ( 6) 00:12:24.739 40445.074 - 40694.735: 99.2630% ( 6) 00:12:24.739 40694.735 - 40944.396: 99.3219% ( 6) 00:12:24.739 40944.396 - 41194.057: 99.3711% ( 5) 00:12:24.739 47435.581 - 47685.242: 99.4202% ( 5) 00:12:24.739 47685.242 - 47934.903: 99.4792% ( 6) 00:12:24.739 47934.903 - 48184.564: 99.5283% ( 5) 00:12:24.739 48184.564 - 48434.225: 99.5873% ( 6) 00:12:24.739 48434.225 - 48683.886: 99.6364% ( 5) 00:12:24.739 48683.886 - 48933.547: 99.7052% ( 7) 00:12:24.739 48933.547 - 49183.208: 99.7642% ( 6) 00:12:24.739 49183.208 - 49432.869: 99.8231% ( 6) 00:12:24.739 49432.869 - 49682.530: 99.8821% ( 6) 00:12:24.739 49682.530 - 49932.190: 99.9410% ( 6) 00:12:24.740 49932.190 - 50181.851: 100.0000% ( 6) 00:12:24.740 00:12:24.740 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:12:24.740 ============================================================================== 00:12:24.740 Range in us Cumulative IO count 00:12:24.740 9362.286 - 9424.701: 0.0098% ( 1) 00:12:24.740 9424.701 - 9487.116: 0.0197% ( 1) 00:12:24.740 9487.116 - 9549.531: 0.0393% ( 2) 00:12:24.740 9549.531 - 9611.947: 0.1179% ( 8) 00:12:24.740 9611.947 - 9674.362: 0.2653% ( 15) 00:12:24.740 9674.362 - 9736.777: 0.4619% ( 20) 00:12:24.740 9736.777 - 9799.192: 0.8943% ( 44) 00:12:24.740 9799.192 - 9861.608: 1.2677% ( 38) 00:12:24.740 9861.608 - 9924.023: 1.7787% ( 52) 00:12:24.740 9924.023 - 9986.438: 2.2897% ( 52) 00:12:24.740 9986.438 - 10048.853: 2.8400% ( 56) 00:12:24.740 10048.853 - 10111.269: 3.1741% ( 34) 00:12:24.740 10111.269 - 10173.684: 3.6950% ( 53) 00:12:24.740 10173.684 - 10236.099: 4.3436% ( 66) 00:12:24.740 10236.099 - 10298.514: 4.9528% ( 62) 00:12:24.740 10298.514 - 10360.930: 5.6112% ( 67) 00:12:24.740 10360.930 - 10423.345: 6.7020% ( 111) 00:12:24.740 10423.345 - 10485.760: 7.9403% ( 126) 00:12:24.740 10485.760 - 10548.175: 8.9033% ( 98) 00:12:24.740 10548.175 - 10610.590: 10.1022% ( 122) 00:12:24.740 10610.590 - 10673.006: 11.3601% ( 128) 00:12:24.740 10673.006 - 10735.421: 12.5983% ( 126) 00:12:24.740 10735.421 - 10797.836: 13.7873% ( 121) 00:12:24.740 10797.836 - 10860.251: 14.7602% ( 99) 00:12:24.740 10860.251 - 10922.667: 15.8805% ( 114) 00:12:24.740 10922.667 - 10985.082: 16.8927% ( 103) 00:12:24.740 10985.082 - 11047.497: 18.1800% ( 131) 00:12:24.740 11047.497 - 11109.912: 19.2119% ( 105) 00:12:24.740 11109.912 - 11172.328: 20.4599% ( 127) 00:12:24.740 11172.328 - 11234.743: 22.0028% ( 157) 00:12:24.740 11234.743 - 11297.158: 23.7323% ( 176) 00:12:24.740 11297.158 - 11359.573: 25.3636% ( 166) 00:12:24.740 11359.573 - 11421.989: 27.1521% ( 182) 00:12:24.740 11421.989 - 11484.404: 28.8129% ( 169) 00:12:24.740 11484.404 - 11546.819: 30.6211% ( 184) 00:12:24.740 11546.819 - 11609.234: 32.3899% ( 180) 00:12:24.740 11609.234 - 11671.650: 34.1392% ( 178) 00:12:24.740 11671.650 - 11734.065: 36.1439% ( 204) 00:12:24.740 11734.065 - 11796.480: 37.8734% ( 176) 00:12:24.740 11796.480 - 11858.895: 39.6718% ( 183) 00:12:24.740 11858.895 - 11921.310: 41.4210% ( 178) 00:12:24.740 11921.310 - 
11983.726: 43.1211% ( 173) 00:12:24.740 11983.726 - 12046.141: 44.9292% ( 184) 00:12:24.740 12046.141 - 12108.556: 46.7374% ( 184) 00:12:24.740 12108.556 - 12170.971: 48.2704% ( 156) 00:12:24.740 12170.971 - 12233.387: 50.0197% ( 178) 00:12:24.740 12233.387 - 12295.802: 51.7590% ( 177) 00:12:24.740 12295.802 - 12358.217: 53.5083% ( 178) 00:12:24.740 12358.217 - 12420.632: 55.2083% ( 173) 00:12:24.740 12420.632 - 12483.048: 56.9969% ( 182) 00:12:24.740 12483.048 - 12545.463: 58.7264% ( 176) 00:12:24.740 12545.463 - 12607.878: 60.6525% ( 196) 00:12:24.740 12607.878 - 12670.293: 62.5393% ( 192) 00:12:24.740 12670.293 - 12732.709: 64.4359% ( 193) 00:12:24.740 12732.709 - 12795.124: 66.3620% ( 196) 00:12:24.740 12795.124 - 12857.539: 68.4847% ( 216) 00:12:24.740 12857.539 - 12919.954: 70.6368% ( 219) 00:12:24.740 12919.954 - 12982.370: 72.5531% ( 195) 00:12:24.740 12982.370 - 13044.785: 74.4006% ( 188) 00:12:24.740 13044.785 - 13107.200: 76.2677% ( 190) 00:12:24.740 13107.200 - 13169.615: 78.0366% ( 180) 00:12:24.740 13169.615 - 13232.030: 79.8153% ( 181) 00:12:24.740 13232.030 - 13294.446: 81.5350% ( 175) 00:12:24.740 13294.446 - 13356.861: 83.4414% ( 194) 00:12:24.740 13356.861 - 13419.276: 84.9941% ( 158) 00:12:24.740 13419.276 - 13481.691: 86.2520% ( 128) 00:12:24.740 13481.691 - 13544.107: 87.4902% ( 126) 00:12:24.740 13544.107 - 13606.522: 88.6596% ( 119) 00:12:24.740 13606.522 - 13668.937: 89.7504% ( 111) 00:12:24.740 13668.937 - 13731.352: 90.7331% ( 100) 00:12:24.740 13731.352 - 13793.768: 91.6274% ( 91) 00:12:24.740 13793.768 - 13856.183: 92.5118% ( 90) 00:12:24.740 13856.183 - 13918.598: 93.1309% ( 63) 00:12:24.740 13918.598 - 13981.013: 93.7598% ( 64) 00:12:24.740 13981.013 - 14043.429: 94.2807% ( 53) 00:12:24.740 14043.429 - 14105.844: 94.8211% ( 55) 00:12:24.740 14105.844 - 14168.259: 95.2634% ( 45) 00:12:24.740 14168.259 - 14230.674: 95.6663% ( 41) 00:12:24.740 14230.674 - 14293.090: 96.0102% ( 35) 00:12:24.740 14293.090 - 14355.505: 96.2362% ( 23) 00:12:24.740 14355.505 - 14417.920: 96.4917% ( 26) 00:12:24.740 14417.920 - 14480.335: 96.6686% ( 18) 00:12:24.740 14480.335 - 14542.750: 96.8357% ( 17) 00:12:24.740 14542.750 - 14605.166: 97.0519% ( 22) 00:12:24.740 14605.166 - 14667.581: 97.2091% ( 16) 00:12:24.740 14667.581 - 14729.996: 97.3958% ( 19) 00:12:24.740 14729.996 - 14792.411: 97.5138% ( 12) 00:12:24.740 14792.411 - 14854.827: 97.6120% ( 10) 00:12:24.740 14854.827 - 14917.242: 97.8479% ( 24) 00:12:24.740 14917.242 - 14979.657: 97.9461% ( 10) 00:12:24.740 14979.657 - 15042.072: 98.0149% ( 7) 00:12:24.740 15042.072 - 15104.488: 98.0739% ( 6) 00:12:24.740 15104.488 - 15166.903: 98.1525% ( 8) 00:12:24.740 15166.903 - 15229.318: 98.1820% ( 3) 00:12:24.740 15229.318 - 15291.733: 98.2410% ( 6) 00:12:24.740 15291.733 - 15354.149: 98.2901% ( 5) 00:12:24.740 15354.149 - 15416.564: 98.3097% ( 2) 00:12:24.740 15416.564 - 15478.979: 98.3196% ( 1) 00:12:24.740 15478.979 - 15541.394: 98.3392% ( 2) 00:12:24.740 15541.394 - 15603.810: 98.3687% ( 3) 00:12:24.740 15603.810 - 15666.225: 98.3884% ( 2) 00:12:24.740 15666.225 - 15728.640: 98.4178% ( 3) 00:12:24.740 15728.640 - 15791.055: 98.4375% ( 2) 00:12:24.740 15791.055 - 15853.470: 98.4866% ( 5) 00:12:24.740 15853.470 - 15915.886: 98.5456% ( 6) 00:12:24.740 15915.886 - 15978.301: 98.5849% ( 4) 00:12:24.740 15978.301 - 16103.131: 98.6930% ( 11) 00:12:24.740 16103.131 - 16227.962: 98.7421% ( 5) 00:12:24.740 36200.838 - 36450.499: 98.7618% ( 2) 00:12:24.740 36450.499 - 36700.160: 98.8306% ( 7) 00:12:24.740 36700.160 - 36949.821: 98.8895% ( 6) 
00:12:24.740 36949.821 - 37199.482: 98.9485% ( 6) 00:12:24.740 37199.482 - 37449.143: 99.0075% ( 6) 00:12:24.740 37449.143 - 37698.804: 99.0664% ( 6) 00:12:24.740 37698.804 - 37948.465: 99.1254% ( 6) 00:12:24.740 37948.465 - 38198.126: 99.1844% ( 6) 00:12:24.740 38198.126 - 38447.787: 99.2433% ( 6) 00:12:24.740 38447.787 - 38697.448: 99.3023% ( 6) 00:12:24.740 38697.448 - 38947.109: 99.3612% ( 6) 00:12:24.740 38947.109 - 39196.770: 99.3711% ( 1) 00:12:24.740 44938.971 - 45188.632: 99.4006% ( 3) 00:12:24.740 45188.632 - 45438.293: 99.4595% ( 6) 00:12:24.740 45438.293 - 45687.954: 99.5185% ( 6) 00:12:24.740 45687.954 - 45937.615: 99.5578% ( 4) 00:12:24.740 45937.615 - 46187.276: 99.6266% ( 7) 00:12:24.740 46187.276 - 46436.937: 99.6757% ( 5) 00:12:24.740 46436.937 - 46686.598: 99.7347% ( 6) 00:12:24.740 46686.598 - 46936.259: 99.7936% ( 6) 00:12:24.740 46936.259 - 47185.920: 99.8526% ( 6) 00:12:24.740 47185.920 - 47435.581: 99.9116% ( 6) 00:12:24.740 47435.581 - 47685.242: 99.9803% ( 7) 00:12:24.740 47685.242 - 47934.903: 100.0000% ( 2) 00:12:24.740 00:12:24.740 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:12:24.740 ============================================================================== 00:12:24.740 Range in us Cumulative IO count 00:12:24.740 9299.870 - 9362.286: 0.0098% ( 1) 00:12:24.740 9362.286 - 9424.701: 0.0197% ( 1) 00:12:24.740 9424.701 - 9487.116: 0.0590% ( 4) 00:12:24.740 9487.116 - 9549.531: 0.1278% ( 7) 00:12:24.740 9549.531 - 9611.947: 0.2162% ( 9) 00:12:24.740 9611.947 - 9674.362: 0.5896% ( 38) 00:12:24.740 9674.362 - 9736.777: 0.7960% ( 21) 00:12:24.740 9736.777 - 9799.192: 0.9925% ( 20) 00:12:24.740 9799.192 - 9861.608: 1.2087% ( 22) 00:12:24.740 9861.608 - 9924.023: 1.8966% ( 70) 00:12:24.740 9924.023 - 9986.438: 2.4469% ( 56) 00:12:24.740 9986.438 - 10048.853: 2.7221% ( 28) 00:12:24.740 10048.853 - 10111.269: 3.2036% ( 49) 00:12:24.740 10111.269 - 10173.684: 3.7048% ( 51) 00:12:24.740 10173.684 - 10236.099: 4.3141% ( 62) 00:12:24.740 10236.099 - 10298.514: 4.9332% ( 63) 00:12:24.740 10298.514 - 10360.930: 5.8766% ( 96) 00:12:24.740 10360.930 - 10423.345: 6.6922% ( 83) 00:12:24.740 10423.345 - 10485.760: 7.6749% ( 100) 00:12:24.740 10485.760 - 10548.175: 8.8640% ( 121) 00:12:24.740 10548.175 - 10610.590: 10.0924% ( 125) 00:12:24.740 10610.590 - 10673.006: 11.5959% ( 153) 00:12:24.740 10673.006 - 10735.421: 12.6376% ( 106) 00:12:24.740 10735.421 - 10797.836: 13.6498% ( 103) 00:12:24.740 10797.836 - 10860.251: 14.7013% ( 107) 00:12:24.740 10860.251 - 10922.667: 15.8117% ( 113) 00:12:24.741 10922.667 - 10985.082: 16.8141% ( 102) 00:12:24.741 10985.082 - 11047.497: 18.0818% ( 129) 00:12:24.741 11047.497 - 11109.912: 19.2708% ( 121) 00:12:24.741 11109.912 - 11172.328: 20.7940% ( 155) 00:12:24.741 11172.328 - 11234.743: 21.8947% ( 112) 00:12:24.741 11234.743 - 11297.158: 23.0936% ( 122) 00:12:24.741 11297.158 - 11359.573: 24.7838% ( 172) 00:12:24.741 11359.573 - 11421.989: 26.4937% ( 174) 00:12:24.741 11421.989 - 11484.404: 28.4395% ( 198) 00:12:24.741 11484.404 - 11546.819: 30.1985% ( 179) 00:12:24.741 11546.819 - 11609.234: 32.3605% ( 220) 00:12:24.741 11609.234 - 11671.650: 34.2178% ( 189) 00:12:24.741 11671.650 - 11734.065: 35.8589% ( 167) 00:12:24.741 11734.065 - 11796.480: 37.5393% ( 171) 00:12:24.741 11796.480 - 11858.895: 39.4556% ( 195) 00:12:24.741 11858.895 - 11921.310: 41.1360% ( 171) 00:12:24.741 11921.310 - 11983.726: 42.8852% ( 178) 00:12:24.741 11983.726 - 12046.141: 44.7524% ( 190) 00:12:24.741 12046.141 - 12108.556: 46.5998% ( 188) 
00:12:24.741 12108.556 - 12170.971: 48.2999% ( 173) 00:12:24.741 12170.971 - 12233.387: 50.0098% ( 174) 00:12:24.741 12233.387 - 12295.802: 51.6215% ( 164) 00:12:24.741 12295.802 - 12358.217: 53.3117% ( 172) 00:12:24.741 12358.217 - 12420.632: 55.1395% ( 186) 00:12:24.741 12420.632 - 12483.048: 56.6824% ( 157) 00:12:24.741 12483.048 - 12545.463: 58.3825% ( 173) 00:12:24.741 12545.463 - 12607.878: 60.3479% ( 200) 00:12:24.741 12607.878 - 12670.293: 62.4705% ( 216) 00:12:24.741 12670.293 - 12732.709: 64.4949% ( 206) 00:12:24.741 12732.709 - 12795.124: 66.3817% ( 192) 00:12:24.741 12795.124 - 12857.539: 68.4748% ( 213) 00:12:24.741 12857.539 - 12919.954: 70.5483% ( 211) 00:12:24.741 12919.954 - 12982.370: 72.5727% ( 206) 00:12:24.741 12982.370 - 13044.785: 74.2335% ( 169) 00:12:24.741 13044.785 - 13107.200: 76.2480% ( 205) 00:12:24.741 13107.200 - 13169.615: 78.1250% ( 191) 00:12:24.741 13169.615 - 13232.030: 80.0118% ( 192) 00:12:24.741 13232.030 - 13294.446: 81.6922% ( 171) 00:12:24.741 13294.446 - 13356.861: 83.4709% ( 181) 00:12:24.741 13356.861 - 13419.276: 84.9646% ( 152) 00:12:24.741 13419.276 - 13481.691: 86.3208% ( 138) 00:12:24.741 13481.691 - 13544.107: 87.5688% ( 127) 00:12:24.741 13544.107 - 13606.522: 88.8267% ( 128) 00:12:24.741 13606.522 - 13668.937: 89.9273% ( 112) 00:12:24.741 13668.937 - 13731.352: 90.9395% ( 103) 00:12:24.741 13731.352 - 13793.768: 91.7453% ( 82) 00:12:24.741 13793.768 - 13856.183: 92.5118% ( 78) 00:12:24.741 13856.183 - 13918.598: 93.2095% ( 71) 00:12:24.741 13918.598 - 13981.013: 93.7402% ( 54) 00:12:24.741 13981.013 - 14043.429: 94.2119% ( 48) 00:12:24.741 14043.429 - 14105.844: 94.6737% ( 47) 00:12:24.741 14105.844 - 14168.259: 95.1061% ( 44) 00:12:24.741 14168.259 - 14230.674: 95.5090% ( 41) 00:12:24.741 14230.674 - 14293.090: 95.8235% ( 32) 00:12:24.741 14293.090 - 14355.505: 96.1281% ( 31) 00:12:24.741 14355.505 - 14417.920: 96.4230% ( 30) 00:12:24.741 14417.920 - 14480.335: 96.6686% ( 25) 00:12:24.741 14480.335 - 14542.750: 96.8652% ( 20) 00:12:24.741 14542.750 - 14605.166: 97.0028% ( 14) 00:12:24.741 14605.166 - 14667.581: 97.1993% ( 20) 00:12:24.741 14667.581 - 14729.996: 97.4351% ( 24) 00:12:24.741 14729.996 - 14792.411: 97.5629% ( 13) 00:12:24.741 14792.411 - 14854.827: 97.6710% ( 11) 00:12:24.741 14854.827 - 14917.242: 97.7594% ( 9) 00:12:24.741 14917.242 - 14979.657: 97.8282% ( 7) 00:12:24.741 14979.657 - 15042.072: 97.9167% ( 9) 00:12:24.741 15042.072 - 15104.488: 98.0051% ( 9) 00:12:24.741 15104.488 - 15166.903: 98.0837% ( 8) 00:12:24.741 15166.903 - 15229.318: 98.0936% ( 1) 00:12:24.741 15229.318 - 15291.733: 98.1132% ( 2) 00:12:24.741 15603.810 - 15666.225: 98.1230% ( 1) 00:12:24.741 15728.640 - 15791.055: 98.1329% ( 1) 00:12:24.741 15791.055 - 15853.470: 98.1623% ( 3) 00:12:24.741 15853.470 - 15915.886: 98.1918% ( 3) 00:12:24.741 15915.886 - 15978.301: 98.2115% ( 2) 00:12:24.741 15978.301 - 16103.131: 98.2508% ( 4) 00:12:24.741 16103.131 - 16227.962: 98.2901% ( 4) 00:12:24.741 16227.962 - 16352.792: 98.3392% ( 5) 00:12:24.741 16352.792 - 16477.623: 98.4080% ( 7) 00:12:24.741 16477.623 - 16602.453: 98.5456% ( 14) 00:12:24.741 16602.453 - 16727.284: 98.6144% ( 7) 00:12:24.741 16727.284 - 16852.114: 98.6832% ( 7) 00:12:24.741 16852.114 - 16976.945: 98.7225% ( 4) 00:12:24.741 16976.945 - 17101.775: 98.7421% ( 2) 00:12:24.741 33204.907 - 33454.568: 98.7618% ( 2) 00:12:24.741 33454.568 - 33704.229: 98.8306% ( 7) 00:12:24.741 33704.229 - 33953.890: 98.8895% ( 6) 00:12:24.741 33953.890 - 34203.550: 98.9485% ( 6) 00:12:24.741 34203.550 - 
34453.211: 99.0173% ( 7) 00:12:24.741 34453.211 - 34702.872: 99.0664% ( 5) 00:12:24.741 34702.872 - 34952.533: 99.1352% ( 7) 00:12:24.741 34952.533 - 35202.194: 99.1942% ( 6) 00:12:24.741 35202.194 - 35451.855: 99.2531% ( 6) 00:12:24.741 35451.855 - 35701.516: 99.3121% ( 6) 00:12:24.741 35701.516 - 35951.177: 99.3711% ( 6) 00:12:24.741 41943.040 - 42192.701: 99.4006% ( 3) 00:12:24.741 42192.701 - 42442.362: 99.4595% ( 6) 00:12:24.741 42442.362 - 42692.023: 99.5185% ( 6) 00:12:24.741 42692.023 - 42941.684: 99.5676% ( 5) 00:12:24.741 42941.684 - 43191.345: 99.6364% ( 7) 00:12:24.741 43191.345 - 43441.006: 99.7052% ( 7) 00:12:24.741 43441.006 - 43690.667: 99.7642% ( 6) 00:12:24.741 43690.667 - 43940.328: 99.8231% ( 6) 00:12:24.741 43940.328 - 44189.989: 99.8821% ( 6) 00:12:24.741 44189.989 - 44439.650: 99.9410% ( 6) 00:12:24.741 44439.650 - 44689.310: 100.0000% ( 6) 00:12:24.741 00:12:24.741 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:12:24.741 ============================================================================== 00:12:24.741 Range in us Cumulative IO count 00:12:24.741 9237.455 - 9299.870: 0.0098% ( 1) 00:12:24.741 9299.870 - 9362.286: 0.0590% ( 5) 00:12:24.741 9362.286 - 9424.701: 0.0884% ( 3) 00:12:24.741 9424.701 - 9487.116: 0.1572% ( 7) 00:12:24.741 9487.116 - 9549.531: 0.3931% ( 24) 00:12:24.741 9549.531 - 9611.947: 0.5110% ( 12) 00:12:24.741 9611.947 - 9674.362: 0.6388% ( 13) 00:12:24.741 9674.362 - 9736.777: 0.8943% ( 26) 00:12:24.741 9736.777 - 9799.192: 1.1399% ( 25) 00:12:24.741 9799.192 - 9861.608: 1.4151% ( 28) 00:12:24.741 9861.608 - 9924.023: 1.9851% ( 58) 00:12:24.741 9924.023 - 9986.438: 2.6238% ( 65) 00:12:24.741 9986.438 - 10048.853: 2.8400% ( 22) 00:12:24.741 10048.853 - 10111.269: 3.1250% ( 29) 00:12:24.741 10111.269 - 10173.684: 3.5476% ( 43) 00:12:24.741 10173.684 - 10236.099: 3.9898% ( 45) 00:12:24.741 10236.099 - 10298.514: 4.4811% ( 50) 00:12:24.741 10298.514 - 10360.930: 5.3557% ( 89) 00:12:24.741 10360.930 - 10423.345: 6.5939% ( 126) 00:12:24.741 10423.345 - 10485.760: 7.8813% ( 131) 00:12:24.741 10485.760 - 10548.175: 8.8050% ( 94) 00:12:24.741 10548.175 - 10610.590: 9.9155% ( 113) 00:12:24.741 10610.590 - 10673.006: 11.1242% ( 123) 00:12:24.741 10673.006 - 10735.421: 12.2248% ( 112) 00:12:24.741 10735.421 - 10797.836: 13.5417% ( 134) 00:12:24.741 10797.836 - 10860.251: 14.8880% ( 137) 00:12:24.741 10860.251 - 10922.667: 16.0574% ( 119) 00:12:24.741 10922.667 - 10985.082: 17.2563% ( 122) 00:12:24.741 10985.082 - 11047.497: 18.9171% ( 169) 00:12:24.741 11047.497 - 11109.912: 20.1061% ( 121) 00:12:24.741 11109.912 - 11172.328: 21.2854% ( 120) 00:12:24.741 11172.328 - 11234.743: 22.4548% ( 119) 00:12:24.741 11234.743 - 11297.158: 23.6930% ( 126) 00:12:24.741 11297.158 - 11359.573: 25.1867% ( 152) 00:12:24.741 11359.573 - 11421.989: 26.7787% ( 162) 00:12:24.741 11421.989 - 11484.404: 28.4198% ( 167) 00:12:24.741 11484.404 - 11546.819: 30.1002% ( 171) 00:12:24.741 11546.819 - 11609.234: 31.7708% ( 170) 00:12:24.741 11609.234 - 11671.650: 33.2547% ( 151) 00:12:24.741 11671.650 - 11734.065: 35.1808% ( 196) 00:12:24.741 11734.065 - 11796.480: 37.0873% ( 194) 00:12:24.741 11796.480 - 11858.895: 38.9347% ( 188) 00:12:24.741 11858.895 - 11921.310: 40.6447% ( 174) 00:12:24.741 11921.310 - 11983.726: 42.5118% ( 190) 00:12:24.741 11983.726 - 12046.141: 44.3888% ( 191) 00:12:24.741 12046.141 - 12108.556: 46.1478% ( 179) 00:12:24.741 12108.556 - 12170.971: 47.5531% ( 143) 00:12:24.741 12170.971 - 12233.387: 49.2728% ( 175) 00:12:24.742 12233.387 - 
12295.802: 50.8550% ( 161) 00:12:24.742 12295.802 - 12358.217: 53.1447% ( 233) 00:12:24.742 12358.217 - 12420.632: 55.0708% ( 196) 00:12:24.742 12420.632 - 12483.048: 56.7119% ( 167) 00:12:24.742 12483.048 - 12545.463: 58.4807% ( 180) 00:12:24.742 12545.463 - 12607.878: 60.4363% ( 199) 00:12:24.742 12607.878 - 12670.293: 62.2347% ( 183) 00:12:24.742 12670.293 - 12732.709: 64.0723% ( 187) 00:12:24.742 12732.709 - 12795.124: 66.0574% ( 202) 00:12:24.742 12795.124 - 12857.539: 68.0916% ( 207) 00:12:24.742 12857.539 - 12919.954: 70.2339% ( 218) 00:12:24.742 12919.954 - 12982.370: 72.4843% ( 229) 00:12:24.742 12982.370 - 13044.785: 74.5578% ( 211) 00:12:24.742 13044.785 - 13107.200: 76.4347% ( 191) 00:12:24.742 13107.200 - 13169.615: 78.3117% ( 191) 00:12:24.742 13169.615 - 13232.030: 80.0609% ( 178) 00:12:24.742 13232.030 - 13294.446: 81.6038% ( 157) 00:12:24.742 13294.446 - 13356.861: 83.3137% ( 174) 00:12:24.742 13356.861 - 13419.276: 84.9351% ( 165) 00:12:24.742 13419.276 - 13481.691: 86.2323% ( 132) 00:12:24.742 13481.691 - 13544.107: 87.4116% ( 120) 00:12:24.742 13544.107 - 13606.522: 88.5908% ( 120) 00:12:24.742 13606.522 - 13668.937: 89.7111% ( 114) 00:12:24.742 13668.937 - 13731.352: 90.6447% ( 95) 00:12:24.742 13731.352 - 13793.768: 91.6274% ( 100) 00:12:24.742 13793.768 - 13856.183: 92.4135% ( 80) 00:12:24.742 13856.183 - 13918.598: 93.1407% ( 74) 00:12:24.742 13918.598 - 13981.013: 93.7893% ( 66) 00:12:24.742 13981.013 - 14043.429: 94.3003% ( 52) 00:12:24.742 14043.429 - 14105.844: 94.8310% ( 54) 00:12:24.742 14105.844 - 14168.259: 95.2732% ( 45) 00:12:24.742 14168.259 - 14230.674: 95.6859% ( 42) 00:12:24.742 14230.674 - 14293.090: 96.0299% ( 35) 00:12:24.742 14293.090 - 14355.505: 96.2952% ( 27) 00:12:24.742 14355.505 - 14417.920: 96.5409% ( 25) 00:12:24.742 14417.920 - 14480.335: 96.7767% ( 24) 00:12:24.742 14480.335 - 14542.750: 97.1403% ( 37) 00:12:24.742 14542.750 - 14605.166: 97.3369% ( 20) 00:12:24.742 14605.166 - 14667.581: 97.5138% ( 18) 00:12:24.742 14667.581 - 14729.996: 97.6415% ( 13) 00:12:24.742 14729.996 - 14792.411: 97.7300% ( 9) 00:12:24.742 14792.411 - 14854.827: 97.8086% ( 8) 00:12:24.742 14854.827 - 14917.242: 97.8774% ( 7) 00:12:24.742 14917.242 - 14979.657: 97.9363% ( 6) 00:12:24.742 14979.657 - 15042.072: 97.9658% ( 3) 00:12:24.742 15042.072 - 15104.488: 97.9953% ( 3) 00:12:24.742 15104.488 - 15166.903: 98.0248% ( 3) 00:12:24.742 15166.903 - 15229.318: 98.0542% ( 3) 00:12:24.742 15229.318 - 15291.733: 98.0837% ( 3) 00:12:24.742 15291.733 - 15354.149: 98.1132% ( 3) 00:12:24.742 16352.792 - 16477.623: 98.1722% ( 6) 00:12:24.742 16477.623 - 16602.453: 98.2115% ( 4) 00:12:24.742 16602.453 - 16727.284: 98.2508% ( 4) 00:12:24.742 16727.284 - 16852.114: 98.2999% ( 5) 00:12:24.742 16852.114 - 16976.945: 98.3392% ( 4) 00:12:24.742 16976.945 - 17101.775: 98.3982% ( 6) 00:12:24.742 17101.775 - 17226.606: 98.4768% ( 8) 00:12:24.742 17226.606 - 17351.436: 98.6046% ( 13) 00:12:24.742 17351.436 - 17476.267: 98.7028% ( 10) 00:12:24.742 17476.267 - 17601.097: 98.7421% ( 4) 00:12:24.742 30208.975 - 30333.806: 98.7716% ( 3) 00:12:24.742 30333.806 - 30458.636: 98.8011% ( 3) 00:12:24.742 30458.636 - 30583.467: 98.8306% ( 3) 00:12:24.742 30583.467 - 30708.297: 98.8601% ( 3) 00:12:24.742 30708.297 - 30833.128: 98.8797% ( 2) 00:12:24.742 30833.128 - 30957.958: 98.9190% ( 4) 00:12:24.742 30957.958 - 31082.789: 98.9485% ( 3) 00:12:24.742 31082.789 - 31207.619: 98.9780% ( 3) 00:12:24.742 31207.619 - 31332.450: 99.0075% ( 3) 00:12:24.742 31332.450 - 31457.280: 99.0369% ( 3) 00:12:24.742 
31457.280 - 31582.110: 99.0664% ( 3) 00:12:24.742 31582.110 - 31706.941: 99.0959% ( 3) 00:12:24.742 31706.941 - 31831.771: 99.1254% ( 3) 00:12:24.742 31831.771 - 31956.602: 99.1549% ( 3) 00:12:24.742 31956.602 - 32206.263: 99.2040% ( 5) 00:12:24.742 32206.263 - 32455.924: 99.2630% ( 6) 00:12:24.742 32455.924 - 32705.585: 99.3318% ( 7) 00:12:24.742 32705.585 - 32955.246: 99.3711% ( 4) 00:12:24.742 39196.770 - 39446.430: 99.4202% ( 5) 00:12:24.742 39446.430 - 39696.091: 99.4890% ( 7) 00:12:24.742 39696.091 - 39945.752: 99.5283% ( 4) 00:12:24.742 39945.752 - 40195.413: 99.5873% ( 6) 00:12:24.742 40195.413 - 40445.074: 99.6462% ( 6) 00:12:24.742 40445.074 - 40694.735: 99.7052% ( 6) 00:12:24.742 40694.735 - 40944.396: 99.7642% ( 6) 00:12:24.742 40944.396 - 41194.057: 99.8231% ( 6) 00:12:24.742 41194.057 - 41443.718: 99.8821% ( 6) 00:12:24.742 41443.718 - 41693.379: 99.9410% ( 6) 00:12:24.742 41693.379 - 41943.040: 100.0000% ( 6) 00:12:24.742 00:12:24.742 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:12:24.742 ============================================================================== 00:12:24.742 Range in us Cumulative IO count 00:12:24.742 9175.040 - 9237.455: 0.0098% ( 1) 00:12:24.742 9362.286 - 9424.701: 0.0781% ( 7) 00:12:24.742 9424.701 - 9487.116: 0.1953% ( 12) 00:12:24.742 9487.116 - 9549.531: 0.2734% ( 8) 00:12:24.742 9549.531 - 9611.947: 0.4492% ( 18) 00:12:24.742 9611.947 - 9674.362: 0.7324% ( 29) 00:12:24.742 9674.362 - 9736.777: 1.0840% ( 36) 00:12:24.742 9736.777 - 9799.192: 1.3184% ( 24) 00:12:24.742 9799.192 - 9861.608: 1.7773% ( 47) 00:12:24.742 9861.608 - 9924.023: 2.3828% ( 62) 00:12:24.742 9924.023 - 9986.438: 2.6172% ( 24) 00:12:24.742 9986.438 - 10048.853: 2.8613% ( 25) 00:12:24.742 10048.853 - 10111.269: 3.3301% ( 48) 00:12:24.742 10111.269 - 10173.684: 3.7402% ( 42) 00:12:24.742 10173.684 - 10236.099: 4.3262% ( 60) 00:12:24.742 10236.099 - 10298.514: 5.1562% ( 85) 00:12:24.742 10298.514 - 10360.930: 5.9180% ( 78) 00:12:24.742 10360.930 - 10423.345: 7.0703% ( 118) 00:12:24.742 10423.345 - 10485.760: 8.1738% ( 113) 00:12:24.742 10485.760 - 10548.175: 8.9941% ( 84) 00:12:24.742 10548.175 - 10610.590: 9.8828% ( 91) 00:12:24.742 10610.590 - 10673.006: 10.8301% ( 97) 00:12:24.742 10673.006 - 10735.421: 12.0215% ( 122) 00:12:24.742 10735.421 - 10797.836: 13.5645% ( 158) 00:12:24.742 10797.836 - 10860.251: 14.5605% ( 102) 00:12:24.742 10860.251 - 10922.667: 15.9473% ( 142) 00:12:24.742 10922.667 - 10985.082: 17.5391% ( 163) 00:12:24.742 10985.082 - 11047.497: 18.8184% ( 131) 00:12:24.742 11047.497 - 11109.912: 20.0879% ( 130) 00:12:24.742 11109.912 - 11172.328: 21.2012% ( 114) 00:12:24.742 11172.328 - 11234.743: 22.2754% ( 110) 00:12:24.742 11234.743 - 11297.158: 23.5059% ( 126) 00:12:24.742 11297.158 - 11359.573: 24.9902% ( 152) 00:12:24.742 11359.573 - 11421.989: 26.7090% ( 176) 00:12:24.742 11421.989 - 11484.404: 28.8086% ( 215) 00:12:24.742 11484.404 - 11546.819: 30.6348% ( 187) 00:12:24.742 11546.819 - 11609.234: 32.4316% ( 184) 00:12:24.742 11609.234 - 11671.650: 34.0332% ( 164) 00:12:24.742 11671.650 - 11734.065: 35.7617% ( 177) 00:12:24.742 11734.065 - 11796.480: 37.3438% ( 162) 00:12:24.742 11796.480 - 11858.895: 38.8867% ( 158) 00:12:24.742 11858.895 - 11921.310: 40.7129% ( 187) 00:12:24.742 11921.310 - 11983.726: 42.4316% ( 176) 00:12:24.742 11983.726 - 12046.141: 43.9258% ( 153) 00:12:24.742 12046.141 - 12108.556: 45.4004% ( 151) 00:12:24.742 12108.556 - 12170.971: 47.2168% ( 186) 00:12:24.742 12170.971 - 12233.387: 49.0527% ( 188) 00:12:24.742 
12233.387 - 12295.802: 50.8105% ( 180) 00:12:24.742 12295.802 - 12358.217: 52.6172% ( 185) 00:12:24.742 12358.217 - 12420.632: 54.2383% ( 166) 00:12:24.742 12420.632 - 12483.048: 55.8789% ( 168) 00:12:24.742 12483.048 - 12545.463: 57.5684% ( 173) 00:12:24.742 12545.463 - 12607.878: 59.2578% ( 173) 00:12:24.742 12607.878 - 12670.293: 61.2305% ( 202) 00:12:24.742 12670.293 - 12732.709: 63.4961% ( 232) 00:12:24.742 12732.709 - 12795.124: 65.7520% ( 231) 00:12:24.742 12795.124 - 12857.539: 67.6074% ( 190) 00:12:24.742 12857.539 - 12919.954: 69.5410% ( 198) 00:12:24.742 12919.954 - 12982.370: 71.4844% ( 199) 00:12:24.742 12982.370 - 13044.785: 73.3594% ( 192) 00:12:24.742 13044.785 - 13107.200: 75.2930% ( 198) 00:12:24.742 13107.200 - 13169.615: 77.2168% ( 197) 00:12:24.742 13169.615 - 13232.030: 78.9746% ( 180) 00:12:24.742 13232.030 - 13294.446: 80.7617% ( 183) 00:12:24.742 13294.446 - 13356.861: 82.5000% ( 178) 00:12:24.742 13356.861 - 13419.276: 84.2090% ( 175) 00:12:24.742 13419.276 - 13481.691: 85.7715% ( 160) 00:12:24.742 13481.691 - 13544.107: 87.2168% ( 148) 00:12:24.742 13544.107 - 13606.522: 88.6719% ( 149) 00:12:24.742 13606.522 - 13668.937: 89.8828% ( 124) 00:12:24.742 13668.937 - 13731.352: 90.8789% ( 102) 00:12:24.742 13731.352 - 13793.768: 91.7578% ( 90) 00:12:24.742 13793.768 - 13856.183: 92.3828% ( 64) 00:12:24.742 13856.183 - 13918.598: 92.9785% ( 61) 00:12:24.742 13918.598 - 13981.013: 93.6035% ( 64) 00:12:24.742 13981.013 - 14043.429: 94.2383% ( 65) 00:12:24.742 14043.429 - 14105.844: 94.8535% ( 63) 00:12:24.742 14105.844 - 14168.259: 95.4688% ( 63) 00:12:24.742 14168.259 - 14230.674: 95.9082% ( 45) 00:12:24.742 14230.674 - 14293.090: 96.2598% ( 36) 00:12:24.742 14293.090 - 14355.505: 96.5430% ( 29) 00:12:24.742 14355.505 - 14417.920: 96.7676% ( 23) 00:12:24.742 14417.920 - 14480.335: 96.9824% ( 22) 00:12:24.742 14480.335 - 14542.750: 97.1387% ( 16) 00:12:24.742 14542.750 - 14605.166: 97.3047% ( 17) 00:12:24.742 14605.166 - 14667.581: 97.4512% ( 15) 00:12:24.743 14667.581 - 14729.996: 97.6074% ( 16) 00:12:24.743 14729.996 - 14792.411: 97.7051% ( 10) 00:12:24.743 14792.411 - 14854.827: 97.7734% ( 7) 00:12:24.743 14854.827 - 14917.242: 97.8711% ( 10) 00:12:24.743 14917.242 - 14979.657: 97.9492% ( 8) 00:12:24.743 14979.657 - 15042.072: 98.0176% ( 7) 00:12:24.743 15042.072 - 15104.488: 98.0469% ( 3) 00:12:24.743 15104.488 - 15166.903: 98.1055% ( 6) 00:12:24.743 15166.903 - 15229.318: 98.1250% ( 2) 00:12:24.743 16976.945 - 17101.775: 98.1641% ( 4) 00:12:24.743 17101.775 - 17226.606: 98.2031% ( 4) 00:12:24.743 17226.606 - 17351.436: 98.2715% ( 7) 00:12:24.743 17351.436 - 17476.267: 98.3105% ( 4) 00:12:24.743 17476.267 - 17601.097: 98.3496% ( 4) 00:12:24.743 17601.097 - 17725.928: 98.3887% ( 4) 00:12:24.743 17725.928 - 17850.758: 98.4375% ( 5) 00:12:24.743 17850.758 - 17975.589: 98.4766% ( 4) 00:12:24.743 17975.589 - 18100.419: 98.5156% ( 4) 00:12:24.743 18100.419 - 18225.250: 98.6133% ( 10) 00:12:24.743 18225.250 - 18350.080: 98.6719% ( 6) 00:12:24.743 18350.080 - 18474.910: 98.7012% ( 3) 00:12:24.743 18474.910 - 18599.741: 98.7305% ( 3) 00:12:24.743 18599.741 - 18724.571: 98.7500% ( 2) 00:12:24.743 20472.198 - 20597.029: 98.7695% ( 2) 00:12:24.743 20597.029 - 20721.859: 98.7988% ( 3) 00:12:24.743 20721.859 - 20846.690: 98.8281% ( 3) 00:12:24.743 20846.690 - 20971.520: 98.8574% ( 3) 00:12:24.743 20971.520 - 21096.350: 98.8867% ( 3) 00:12:24.743 21096.350 - 21221.181: 98.9160% ( 3) 00:12:24.743 21221.181 - 21346.011: 98.9453% ( 3) 00:12:24.743 21346.011 - 21470.842: 98.9746% ( 3) 
00:12:24.743 21470.842 - 21595.672: 98.9941% ( 2) 00:12:24.743 21595.672 - 21720.503: 99.0234% ( 3) 00:12:24.743 21720.503 - 21845.333: 99.0430% ( 2) 00:12:24.743 21845.333 - 21970.164: 99.0723% ( 3) 00:12:24.743 21970.164 - 22094.994: 99.1016% ( 3) 00:12:24.743 22094.994 - 22219.825: 99.1211% ( 2) 00:12:24.743 22219.825 - 22344.655: 99.1504% ( 3) 00:12:24.743 22344.655 - 22469.486: 99.1797% ( 3) 00:12:24.743 22469.486 - 22594.316: 99.2090% ( 3) 00:12:24.743 22594.316 - 22719.147: 99.2285% ( 2) 00:12:24.743 22719.147 - 22843.977: 99.2578% ( 3) 00:12:24.743 22843.977 - 22968.808: 99.2871% ( 3) 00:12:24.743 22968.808 - 23093.638: 99.3164% ( 3) 00:12:24.743 23093.638 - 23218.469: 99.3457% ( 3) 00:12:24.743 23218.469 - 23343.299: 99.3750% ( 3) 00:12:24.743 29335.162 - 29459.992: 99.3848% ( 1) 00:12:24.743 29459.992 - 29584.823: 99.4141% ( 3) 00:12:24.743 29584.823 - 29709.653: 99.4434% ( 3) 00:12:24.743 29709.653 - 29834.484: 99.4727% ( 3) 00:12:24.743 29834.484 - 29959.314: 99.5020% ( 3) 00:12:24.743 29959.314 - 30084.145: 99.5215% ( 2) 00:12:24.743 30084.145 - 30208.975: 99.5508% ( 3) 00:12:24.743 30208.975 - 30333.806: 99.5801% ( 3) 00:12:24.743 30333.806 - 30458.636: 99.6094% ( 3) 00:12:24.743 30458.636 - 30583.467: 99.6387% ( 3) 00:12:24.743 30583.467 - 30708.297: 99.6680% ( 3) 00:12:24.743 30708.297 - 30833.128: 99.6973% ( 3) 00:12:24.743 30833.128 - 30957.958: 99.7266% ( 3) 00:12:24.743 30957.958 - 31082.789: 99.7559% ( 3) 00:12:24.743 31082.789 - 31207.619: 99.7852% ( 3) 00:12:24.743 31207.619 - 31332.450: 99.8145% ( 3) 00:12:24.743 31332.450 - 31457.280: 99.8340% ( 2) 00:12:24.743 31457.280 - 31582.110: 99.8633% ( 3) 00:12:24.743 31582.110 - 31706.941: 99.8926% ( 3) 00:12:24.743 31706.941 - 31831.771: 99.9219% ( 3) 00:12:24.743 31831.771 - 31956.602: 99.9512% ( 3) 00:12:24.743 31956.602 - 32206.263: 100.0000% ( 5) 00:12:24.743 00:12:24.743 11:26:30 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:12:24.743 00:12:24.743 real 0m2.944s 00:12:24.743 user 0m2.401s 00:12:24.743 sys 0m0.424s 00:12:24.743 11:26:30 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:24.743 11:26:30 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:12:24.743 ************************************ 00:12:24.743 END TEST nvme_perf 00:12:24.743 ************************************ 00:12:24.743 11:26:30 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:12:24.743 11:26:30 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:24.743 11:26:30 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:24.743 11:26:30 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:24.743 ************************************ 00:12:24.743 START TEST nvme_hello_world 00:12:24.743 ************************************ 00:12:24.743 11:26:30 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:12:25.002 Initializing NVMe Controllers 00:12:25.002 Attached to 0000:00:10.0 00:12:25.002 Namespace ID: 1 size: 6GB 00:12:25.002 Attached to 0000:00:11.0 00:12:25.002 Namespace ID: 1 size: 5GB 00:12:25.002 Attached to 0000:00:13.0 00:12:25.002 Namespace ID: 1 size: 1GB 00:12:25.002 Attached to 0000:00:12.0 00:12:25.002 Namespace ID: 1 size: 4GB 00:12:25.002 Namespace ID: 2 size: 4GB 00:12:25.002 Namespace ID: 3 size: 4GB 00:12:25.002 Initialization complete. 00:12:25.002 INFO: using host memory buffer for IO 00:12:25.002 Hello world! 
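The hello_world example prints one "INFO: using host memory buffer for IO" / "Hello world!" pair per active namespace (the remaining pairs continue below). It prefers a controller memory buffer when one is exposed and falls back to host memory, then writes the string to LBA 0, reads it back, and prints the buffer. A minimal sketch of that round trip, assuming ctrlr/ns/qpair came from a prior spdk_nvme_probe() and spdk_nvme_ctrlr_alloc_io_qpair(); this is not the example's verbatim code, and error handling is elided:

```c
#include <stdio.h>
#include <string.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static bool io_done;

static void
io_complete(void *arg, const struct spdk_nvme_cpl *cpl)
{
	if (spdk_nvme_cpl_is_error(cpl)) {
		fprintf(stderr, "I/O failed: %s\n",
			spdk_nvme_cpl_get_status_string(&cpl->status));
	}
	io_done = true;
}

/* Write "Hello world!" to LBA 0 of 'ns', read it back, print it. */
static void
hello_round_trip(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair)
{
	/* DMA-safe (pinned) buffer of one sector. */
	uint32_t sz = spdk_nvme_ns_get_sector_size(ns);
	char *buf = spdk_zmalloc(sz, 0x1000, NULL,
				 SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);

	snprintf(buf, sz, "%s", "Hello world!\n");
	io_done = false;
	spdk_nvme_ns_cmd_write(ns, qpair, buf, 0 /* LBA */, 1, io_complete, NULL, 0);
	while (!io_done) {
		spdk_nvme_qpair_process_completions(qpair, 0);
	}

	memset(buf, 0, sz);
	io_done = false;
	spdk_nvme_ns_cmd_read(ns, qpair, buf, 0, 1, io_complete, NULL, 0);
	while (!io_done) {
		spdk_nvme_qpair_process_completions(qpair, 0);
	}
	printf("%s", buf);
	spdk_free(buf);
}
```

The busy-poll on spdk_nvme_qpair_process_completions() is the normal SPDK pattern: completions are only reaped when the application polls, there is no interrupt path here.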
00:12:25.002 INFO: using host memory buffer for IO 00:12:25.002 Hello world! 00:12:25.002 INFO: using host memory buffer for IO 00:12:25.002 Hello world! 00:12:25.002 INFO: using host memory buffer for IO 00:12:25.002 Hello world! 00:12:25.002 INFO: using host memory buffer for IO 00:12:25.002 Hello world! 00:12:25.002 INFO: using host memory buffer for IO 00:12:25.002 Hello world! 00:12:25.002 ************************************ 00:12:25.002 END TEST nvme_hello_world 00:12:25.002 ************************************ 00:12:25.002 00:12:25.002 real 0m0.415s 00:12:25.002 user 0m0.156s 00:12:25.002 sys 0m0.193s 00:12:25.002 11:26:30 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:25.002 11:26:30 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:12:25.002 11:26:30 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:12:25.002 11:26:30 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:25.002 11:26:30 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:25.002 11:26:30 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:25.002 ************************************ 00:12:25.002 START TEST nvme_sgl 00:12:25.002 ************************************ 00:12:25.002 11:26:30 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:12:25.260 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:12:25.260 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:12:25.260 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:12:25.518 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:12:25.518 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:12:25.518 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:12:25.518 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:12:25.518 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:12:25.518 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:12:25.518 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:12:25.518 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:12:25.518 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:12:25.518 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:12:25.518 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:12:25.518 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:12:25.518 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:12:25.518 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:12:25.518 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:12:25.518 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:12:25.518 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:12:25.518 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:12:25.518 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:12:25.518 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:12:25.518 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:12:25.518 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:12:25.518 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:12:25.518 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:12:25.518 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:12:25.518 0000:00:12.0: build_io_request_4 Invalid IO length 
parameter 00:12:25.518 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:12:25.518 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:12:25.518 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:12:25.518 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:12:25.518 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:12:25.518 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:12:25.518 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:12:25.518 NVMe Readv/Writev Request test 00:12:25.518 Attached to 0000:00:10.0 00:12:25.518 Attached to 0000:00:11.0 00:12:25.518 Attached to 0000:00:13.0 00:12:25.518 Attached to 0000:00:12.0 00:12:25.518 0000:00:10.0: build_io_request_2 test passed 00:12:25.518 0000:00:10.0: build_io_request_4 test passed 00:12:25.518 0000:00:10.0: build_io_request_5 test passed 00:12:25.518 0000:00:10.0: build_io_request_6 test passed 00:12:25.518 0000:00:10.0: build_io_request_7 test passed 00:12:25.518 0000:00:10.0: build_io_request_10 test passed 00:12:25.518 0000:00:11.0: build_io_request_2 test passed 00:12:25.518 0000:00:11.0: build_io_request_4 test passed 00:12:25.518 0000:00:11.0: build_io_request_5 test passed 00:12:25.518 0000:00:11.0: build_io_request_6 test passed 00:12:25.518 0000:00:11.0: build_io_request_7 test passed 00:12:25.518 0000:00:11.0: build_io_request_10 test passed 00:12:25.518 Cleaning up... 00:12:25.518 00:12:25.518 real 0m0.496s 00:12:25.518 user 0m0.256s 00:12:25.518 sys 0m0.192s 00:12:25.518 11:26:31 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:25.518 11:26:31 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:12:25.518 ************************************ 00:12:25.518 END TEST nvme_sgl 00:12:25.518 ************************************ 00:12:25.518 11:26:31 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:12:25.518 11:26:31 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:25.518 11:26:31 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:25.518 11:26:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:25.518 ************************************ 00:12:25.518 START TEST nvme_e2edp 00:12:25.518 ************************************ 00:12:25.518 11:26:31 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:12:26.084 NVMe Write/Read with End-to-End data protection test 00:12:26.084 Attached to 0000:00:10.0 00:12:26.084 Attached to 0000:00:11.0 00:12:26.084 Attached to 0000:00:13.0 00:12:26.084 Attached to 0000:00:12.0 00:12:26.084 Cleaning up... 
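In the sgl test above, each build_io_request_N case hands the driver a multi-segment payload through a pair of SGL callbacks; the "Invalid IO length parameter" lines are the negative cases, where the segments deliberately do not add up to the requested transfer and the submission is refused up front, while the remaining cases report "test passed". A sketch of the callback plumbing such a case uses (the sgl_ctx type and helper are illustrative, not the test's own code):

```c
#include <sys/uio.h>
#include "spdk/nvme.h"

/* Toy two-segment payload walked by the SGL callbacks below. */
struct sgl_ctx {
	struct iovec iov[2];
	int idx;
};

static void
reset_sgl(void *ref, uint32_t offset)
{
	struct sgl_ctx *ctx = ref;

	ctx->idx = 0;	/* restart the walk; 'offset' ignored in this toy */
}

static int
next_sge(void *ref, void **address, uint32_t *length)
{
	struct sgl_ctx *ctx = ref;

	*address = ctx->iov[ctx->idx].iov_base;
	*length = ctx->iov[ctx->idx].iov_len;
	ctx->idx++;
	return 0;
}

/* Submit a two-segment read of 'lba_count' blocks starting at 'lba'.
 * If the iovecs cannot express that many blocks, the call fails
 * immediately, which is the path the negative cases above probe. */
static int
submit_split_read(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
		  struct sgl_ctx *ctx, uint64_t lba, uint32_t lba_count,
		  spdk_nvme_cmd_cb cb)
{
	return spdk_nvme_ns_cmd_readv(ns, qpair, lba, lba_count, cb, ctx, 0,
				      reset_sgl, next_sge);
}
```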
00:12:26.084 ************************************ 00:12:26.084 END TEST nvme_e2edp 00:12:26.084 ************************************ 00:12:26.084 00:12:26.084 real 0m0.391s 00:12:26.084 user 0m0.143s 00:12:26.084 sys 0m0.194s 00:12:26.084 11:26:31 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:26.084 11:26:31 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:12:26.084 11:26:31 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:12:26.084 11:26:31 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:26.084 11:26:31 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:26.084 11:26:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:26.084 ************************************ 00:12:26.084 START TEST nvme_reserve 00:12:26.084 ************************************ 00:12:26.084 11:26:31 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:12:26.342 ===================================================== 00:12:26.342 NVMe Controller at PCI bus 0, device 16, function 0 00:12:26.342 ===================================================== 00:12:26.342 Reservations: Not Supported 00:12:26.342 ===================================================== 00:12:26.342 NVMe Controller at PCI bus 0, device 17, function 0 00:12:26.342 ===================================================== 00:12:26.342 Reservations: Not Supported 00:12:26.342 ===================================================== 00:12:26.342 NVMe Controller at PCI bus 0, device 19, function 0 00:12:26.342 ===================================================== 00:12:26.342 Reservations: Not Supported 00:12:26.342 ===================================================== 00:12:26.342 NVMe Controller at PCI bus 0, device 18, function 0 00:12:26.342 ===================================================== 00:12:26.342 Reservations: Not Supported 00:12:26.342 Reservation test passed 00:12:26.342 ************************************ 00:12:26.342 END TEST nvme_reserve 00:12:26.342 ************************************ 00:12:26.342 00:12:26.342 real 0m0.342s 00:12:26.342 user 0m0.139s 00:12:26.342 sys 0m0.156s 00:12:26.342 11:26:32 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:26.342 11:26:32 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:12:26.342 11:26:32 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:12:26.342 11:26:32 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:26.342 11:26:32 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:26.342 11:26:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:26.342 ************************************ 00:12:26.342 START TEST nvme_err_injection 00:12:26.342 ************************************ 00:12:26.342 11:26:32 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:12:26.908 NVMe Error Injection test 00:12:26.908 Attached to 0000:00:10.0 00:12:26.908 Attached to 0000:00:11.0 00:12:26.908 Attached to 0000:00:13.0 00:12:26.908 Attached to 0000:00:12.0 00:12:26.909 0000:00:11.0: get features failed as expected 00:12:26.909 0000:00:13.0: get features failed as expected 00:12:26.909 0000:00:12.0: get features failed as expected 00:12:26.909 0000:00:10.0: get features failed as expected 00:12:26.909 
0000:00:10.0: get features successfully as expected 00:12:26.909 0000:00:11.0: get features successfully as expected 00:12:26.909 0000:00:13.0: get features successfully as expected 00:12:26.909 0000:00:12.0: get features successfully as expected 00:12:26.909 0000:00:10.0: read failed as expected 00:12:26.909 0000:00:11.0: read failed as expected 00:12:26.909 0000:00:13.0: read failed as expected 00:12:26.909 0000:00:12.0: read failed as expected 00:12:26.909 0000:00:13.0: read successfully as expected 00:12:26.909 0000:00:10.0: read successfully as expected 00:12:26.909 0000:00:11.0: read successfully as expected 00:12:26.909 0000:00:12.0: read successfully as expected 00:12:26.909 Cleaning up... 00:12:26.909 ************************************ 00:12:26.909 END TEST nvme_err_injection 00:12:26.909 ************************************ 00:12:26.909 00:12:26.909 real 0m0.385s 00:12:26.909 user 0m0.144s 00:12:26.909 sys 0m0.195s 00:12:26.909 11:26:32 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:26.909 11:26:32 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:12:26.909 11:26:32 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:12:26.909 11:26:32 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:12:26.909 11:26:32 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:26.909 11:26:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:26.909 ************************************ 00:12:26.909 START TEST nvme_overhead 00:12:26.909 ************************************ 00:12:26.909 11:26:32 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:12:28.344 Initializing NVMe Controllers 00:12:28.344 Attached to 0000:00:10.0 00:12:28.344 Attached to 0000:00:11.0 00:12:28.344 Attached to 0000:00:13.0 00:12:28.344 Attached to 0000:00:12.0 00:12:28.344 Initialization complete. Launching workers. 
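The "get features failed as expected" results in the error-injection test above (the overhead numbers follow below) rely on arming a fault for a chosen opcode before issuing it. A sketch of arming one such fault, assuming 'ctrlr' came from a prior probe; passing a NULL qpair targets the admin queue:

```c
#include "spdk/nvme.h"

/* Arm a one-shot failure for the next GET FEATURES admin command. */
static int
arm_get_features_failure(struct spdk_nvme_ctrlr *ctrlr)
{
	return spdk_nvme_qpair_add_cmd_error_injection(ctrlr,
			NULL,				/* NULL = admin qpair */
			SPDK_NVME_OPC_GET_FEATURES,	/* opcode to intercept */
			true,				/* do not submit to the device */
			0,				/* no artificial timeout */
			1,				/* fail exactly one command */
			SPDK_NVME_SCT_GENERIC,
			SPDK_NVME_SC_INVALID_FIELD);	/* injected status */
}
```

The matching spdk_nvme_qpair_remove_cmd_error_injection() call disarms it, after which the same command is expected to succeed, hence the paired "failed as expected" / "successfully as expected" lines.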
00:12:28.344 submit (in ns) avg, min, max = 16743.6, 12881.9, 136661.0 00:12:28.344 complete (in ns) avg, min, max = 11307.1, 8708.6, 134517.1 00:12:28.344 00:12:28.344 Submit histogram 00:12:28.344 ================ 00:12:28.344 Range in us Cumulative Count 00:12:28.344 12.861 - 12.922: 0.0101% ( 1) 00:12:28.344 12.983 - 13.044: 0.0201% ( 1) 00:12:28.344 13.349 - 13.410: 0.1006% ( 8) 00:12:28.344 13.410 - 13.470: 0.2213% ( 12) 00:12:28.344 13.470 - 13.531: 0.3621% ( 14) 00:12:28.344 13.531 - 13.592: 0.5833% ( 22) 00:12:28.344 13.592 - 13.653: 0.7744% ( 19) 00:12:28.344 13.653 - 13.714: 0.8951% ( 12) 00:12:28.344 13.714 - 13.775: 1.0258% ( 13) 00:12:28.344 13.775 - 13.836: 1.1666% ( 14) 00:12:28.344 13.836 - 13.897: 1.2873% ( 12) 00:12:28.344 13.897 - 13.958: 1.4281% ( 14) 00:12:28.344 13.958 - 14.019: 1.7097% ( 28) 00:12:28.344 14.019 - 14.080: 2.4339% ( 72) 00:12:28.344 14.080 - 14.141: 3.8117% ( 137) 00:12:28.344 14.141 - 14.202: 6.1551% ( 233) 00:12:28.344 14.202 - 14.263: 9.7858% ( 361) 00:12:28.344 14.263 - 14.324: 14.0601% ( 425) 00:12:28.344 14.324 - 14.385: 18.8575% ( 477) 00:12:28.344 14.385 - 14.446: 23.3028% ( 442) 00:12:28.344 14.446 - 14.507: 27.5571% ( 423) 00:12:28.344 14.507 - 14.568: 31.6705% ( 409) 00:12:28.344 14.568 - 14.629: 34.9995% ( 331) 00:12:28.344 14.629 - 14.690: 38.6402% ( 362) 00:12:28.344 14.690 - 14.750: 42.0597% ( 340) 00:12:28.344 14.750 - 14.811: 45.9720% ( 389) 00:12:28.344 14.811 - 14.872: 49.9648% ( 397) 00:12:28.344 14.872 - 14.933: 52.6300% ( 265) 00:12:28.344 14.933 - 14.994: 55.2550% ( 261) 00:12:28.344 14.994 - 15.055: 57.3670% ( 210) 00:12:28.344 15.055 - 15.116: 58.7549% ( 138) 00:12:28.344 15.116 - 15.177: 60.0724% ( 131) 00:12:28.344 15.177 - 15.238: 61.0178% ( 94) 00:12:28.345 15.238 - 15.299: 61.8827% ( 86) 00:12:28.345 15.299 - 15.360: 62.6974% ( 81) 00:12:28.345 15.360 - 15.421: 63.3712% ( 67) 00:12:28.345 15.421 - 15.482: 63.9545% ( 58) 00:12:28.345 15.482 - 15.543: 64.3568% ( 40) 00:12:28.345 15.543 - 15.604: 64.7692% ( 41) 00:12:28.345 15.604 - 15.726: 65.3927% ( 62) 00:12:28.345 15.726 - 15.848: 65.9761% ( 58) 00:12:28.345 15.848 - 15.970: 66.3180% ( 34) 00:12:28.345 15.970 - 16.091: 66.4990% ( 18) 00:12:28.345 16.091 - 16.213: 66.5694% ( 7) 00:12:28.345 16.213 - 16.335: 66.7002% ( 13) 00:12:28.345 16.335 - 16.457: 66.7706% ( 7) 00:12:28.345 16.457 - 16.579: 66.8209% ( 5) 00:12:28.345 16.579 - 16.701: 66.8309% ( 1) 00:12:28.345 16.701 - 16.823: 66.8712% ( 4) 00:12:28.345 16.945 - 17.067: 66.9013% ( 3) 00:12:28.345 17.067 - 17.189: 66.9617% ( 6) 00:12:28.345 17.189 - 17.310: 66.9717% ( 1) 00:12:28.345 17.310 - 17.432: 66.9818% ( 1) 00:12:28.345 17.432 - 17.554: 67.0019% ( 2) 00:12:28.345 17.676 - 17.798: 67.0120% ( 1) 00:12:28.345 17.798 - 17.920: 67.0220% ( 1) 00:12:28.345 17.920 - 18.042: 67.0924% ( 7) 00:12:28.345 18.042 - 18.164: 67.1427% ( 5) 00:12:28.345 18.164 - 18.286: 67.4947% ( 35) 00:12:28.345 18.286 - 18.408: 69.2749% ( 177) 00:12:28.345 18.408 - 18.530: 72.8050% ( 351) 00:12:28.345 18.530 - 18.651: 75.8121% ( 299) 00:12:28.345 18.651 - 18.773: 78.4874% ( 266) 00:12:28.345 18.773 - 18.895: 80.8006% ( 230) 00:12:28.345 18.895 - 19.017: 82.6411% ( 183) 00:12:28.345 19.017 - 19.139: 83.8379% ( 119) 00:12:28.345 19.139 - 19.261: 84.7330% ( 89) 00:12:28.345 19.261 - 19.383: 85.4873% ( 75) 00:12:28.345 19.383 - 19.505: 85.9801% ( 49) 00:12:28.345 19.505 - 19.627: 86.3924% ( 41) 00:12:28.345 19.627 - 19.749: 86.7545% ( 36) 00:12:28.345 19.749 - 19.870: 87.3982% ( 64) 00:12:28.345 19.870 - 19.992: 88.3335% ( 93) 00:12:28.345 19.992 
- 20.114: 89.1783% ( 84) 00:12:28.345 20.114 - 20.236: 89.7717% ( 59) 00:12:28.345 20.236 - 20.358: 90.2243% ( 45) 00:12:28.345 20.358 - 20.480: 90.5361% ( 31) 00:12:28.345 20.480 - 20.602: 90.7372% ( 20) 00:12:28.345 20.602 - 20.724: 90.9484% ( 21) 00:12:28.345 20.724 - 20.846: 91.0490% ( 10) 00:12:28.345 20.846 - 20.968: 91.1797% ( 13) 00:12:28.345 20.968 - 21.090: 91.3507% ( 17) 00:12:28.345 21.090 - 21.211: 91.4613% ( 11) 00:12:28.345 21.211 - 21.333: 91.6222% ( 16) 00:12:28.345 21.333 - 21.455: 91.7429% ( 12) 00:12:28.345 21.455 - 21.577: 91.8335% ( 9) 00:12:28.345 21.577 - 21.699: 91.9139% ( 8) 00:12:28.345 21.699 - 21.821: 91.9743% ( 6) 00:12:28.345 21.821 - 21.943: 92.0849% ( 11) 00:12:28.345 21.943 - 22.065: 92.2156% ( 13) 00:12:28.345 22.065 - 22.187: 92.2458% ( 3) 00:12:28.345 22.187 - 22.309: 92.3263% ( 8) 00:12:28.345 22.309 - 22.430: 92.4268% ( 10) 00:12:28.345 22.430 - 22.552: 92.4570% ( 3) 00:12:28.345 22.552 - 22.674: 92.5173% ( 6) 00:12:28.345 22.674 - 22.796: 92.5576% ( 4) 00:12:28.345 22.796 - 22.918: 92.6582% ( 10) 00:12:28.345 22.918 - 23.040: 92.7587% ( 10) 00:12:28.345 23.040 - 23.162: 92.9800% ( 22) 00:12:28.345 23.162 - 23.284: 93.2515% ( 27) 00:12:28.345 23.284 - 23.406: 93.6136% ( 36) 00:12:28.345 23.406 - 23.528: 93.9053% ( 29) 00:12:28.345 23.528 - 23.650: 94.2673% ( 36) 00:12:28.345 23.650 - 23.771: 94.4484% ( 18) 00:12:28.345 23.771 - 23.893: 94.6797% ( 23) 00:12:28.345 23.893 - 24.015: 94.9613% ( 28) 00:12:28.345 24.015 - 24.137: 95.1524% ( 19) 00:12:28.345 24.137 - 24.259: 95.3435% ( 19) 00:12:28.345 24.259 - 24.381: 95.4340% ( 9) 00:12:28.345 24.381 - 24.503: 95.5345% ( 10) 00:12:28.345 24.503 - 24.625: 95.6251% ( 9) 00:12:28.345 24.625 - 24.747: 95.7055% ( 8) 00:12:28.345 24.747 - 24.869: 95.7558% ( 5) 00:12:28.345 24.869 - 24.990: 95.8463% ( 9) 00:12:28.345 24.990 - 25.112: 95.9368% ( 9) 00:12:28.345 25.112 - 25.234: 96.0072% ( 7) 00:12:28.345 25.234 - 25.356: 96.1078% ( 10) 00:12:28.345 25.356 - 25.478: 96.2587% ( 15) 00:12:28.345 25.478 - 25.600: 96.3794% ( 12) 00:12:28.345 25.600 - 25.722: 96.4900% ( 11) 00:12:28.345 25.722 - 25.844: 96.6006% ( 11) 00:12:28.345 25.844 - 25.966: 96.7012% ( 10) 00:12:28.345 25.966 - 26.088: 96.8420% ( 14) 00:12:28.345 26.088 - 26.210: 96.9124% ( 7) 00:12:28.345 26.210 - 26.331: 96.9627% ( 5) 00:12:28.345 26.331 - 26.453: 97.0532% ( 9) 00:12:28.345 26.453 - 26.575: 97.1437% ( 9) 00:12:28.345 26.575 - 26.697: 97.2242% ( 8) 00:12:28.345 26.697 - 26.819: 97.3348% ( 11) 00:12:28.345 26.819 - 26.941: 97.4957% ( 16) 00:12:28.345 26.941 - 27.063: 97.5762% ( 8) 00:12:28.345 27.063 - 27.185: 97.6566% ( 8) 00:12:28.345 27.185 - 27.307: 97.7170% ( 6) 00:12:28.345 27.307 - 27.429: 97.8075% ( 9) 00:12:28.345 27.429 - 27.550: 97.9181% ( 11) 00:12:28.345 27.550 - 27.672: 97.9684% ( 5) 00:12:28.345 27.672 - 27.794: 98.0086% ( 4) 00:12:28.345 27.794 - 27.916: 98.0690% ( 6) 00:12:28.345 27.916 - 28.038: 98.1394% ( 7) 00:12:28.345 28.038 - 28.160: 98.2400% ( 10) 00:12:28.345 28.160 - 28.282: 98.2601% ( 2) 00:12:28.345 28.282 - 28.404: 98.3204% ( 6) 00:12:28.345 28.404 - 28.526: 98.4311% ( 11) 00:12:28.345 28.526 - 28.648: 98.4813% ( 5) 00:12:28.345 28.648 - 28.770: 98.5719% ( 9) 00:12:28.345 28.770 - 28.891: 98.6221% ( 5) 00:12:28.345 28.891 - 29.013: 98.6624% ( 4) 00:12:28.345 29.013 - 29.135: 98.7227% ( 6) 00:12:28.345 29.135 - 29.257: 98.7629% ( 4) 00:12:28.345 29.257 - 29.379: 98.8032% ( 4) 00:12:28.345 29.379 - 29.501: 98.8535% ( 5) 00:12:28.345 29.501 - 29.623: 98.8635% ( 1) 00:12:28.345 29.623 - 29.745: 98.9138% ( 5) 00:12:28.345 
29.745 - 29.867: 98.9339% ( 2) 00:12:28.345 29.867 - 29.989: 98.9540% ( 2) 00:12:28.345 29.989 - 30.110: 99.0043% ( 5) 00:12:28.345 30.110 - 30.232: 99.0244% ( 2) 00:12:28.345 30.232 - 30.354: 99.0647% ( 4) 00:12:28.345 30.354 - 30.476: 99.0948% ( 3) 00:12:28.345 30.476 - 30.598: 99.1250% ( 3) 00:12:28.345 30.598 - 30.720: 99.1451% ( 2) 00:12:28.345 30.720 - 30.842: 99.1854% ( 4) 00:12:28.345 30.842 - 30.964: 99.2055% ( 2) 00:12:28.345 30.964 - 31.086: 99.2356% ( 3) 00:12:28.345 31.086 - 31.208: 99.2658% ( 3) 00:12:28.345 31.208 - 31.451: 99.3362% ( 7) 00:12:28.345 31.451 - 31.695: 99.3563% ( 2) 00:12:28.345 31.695 - 31.939: 99.3764% ( 2) 00:12:28.345 32.183 - 32.427: 99.4066% ( 3) 00:12:28.346 32.427 - 32.670: 99.4368% ( 3) 00:12:28.346 32.914 - 33.158: 99.4670% ( 3) 00:12:28.346 33.158 - 33.402: 99.4770% ( 1) 00:12:28.346 33.402 - 33.646: 99.5072% ( 3) 00:12:28.346 33.646 - 33.890: 99.5172% ( 1) 00:12:28.346 33.890 - 34.133: 99.5675% ( 5) 00:12:28.346 34.133 - 34.377: 99.5977% ( 3) 00:12:28.346 34.377 - 34.621: 99.6078% ( 1) 00:12:28.346 34.621 - 34.865: 99.6279% ( 2) 00:12:28.346 35.109 - 35.352: 99.6480% ( 2) 00:12:28.346 35.596 - 35.840: 99.6581% ( 1) 00:12:28.346 36.084 - 36.328: 99.6681% ( 1) 00:12:28.346 36.328 - 36.571: 99.6782% ( 1) 00:12:28.346 37.059 - 37.303: 99.6983% ( 2) 00:12:28.346 38.278 - 38.522: 99.7184% ( 2) 00:12:28.346 39.497 - 39.741: 99.7285% ( 1) 00:12:28.346 39.741 - 39.985: 99.7385% ( 1) 00:12:28.346 39.985 - 40.229: 99.7486% ( 1) 00:12:28.346 40.229 - 40.472: 99.7586% ( 1) 00:12:28.346 40.716 - 40.960: 99.7888% ( 3) 00:12:28.346 41.448 - 41.691: 99.7989% ( 1) 00:12:28.346 41.691 - 41.935: 99.8089% ( 1) 00:12:28.346 42.667 - 42.910: 99.8190% ( 1) 00:12:28.346 43.154 - 43.398: 99.8290% ( 1) 00:12:28.346 44.617 - 44.861: 99.8391% ( 1) 00:12:28.346 45.592 - 45.836: 99.8491% ( 1) 00:12:28.346 45.836 - 46.080: 99.8592% ( 1) 00:12:28.346 47.787 - 48.030: 99.8693% ( 1) 00:12:28.346 49.006 - 49.250: 99.8994% ( 3) 00:12:28.346 51.200 - 51.444: 99.9095% ( 1) 00:12:28.346 56.564 - 56.808: 99.9195% ( 1) 00:12:28.346 57.295 - 57.539: 99.9296% ( 1) 00:12:28.346 58.027 - 58.270: 99.9397% ( 1) 00:12:28.346 68.267 - 68.754: 99.9497% ( 1) 00:12:28.346 70.217 - 70.705: 99.9598% ( 1) 00:12:28.346 74.606 - 75.093: 99.9698% ( 1) 00:12:28.346 99.962 - 100.450: 99.9799% ( 1) 00:12:28.346 111.177 - 111.665: 99.9899% ( 1) 00:12:28.346 136.533 - 137.509: 100.0000% ( 1) 00:12:28.346 00:12:28.346 Complete histogram 00:12:28.346 ================== 00:12:28.346 Range in us Cumulative Count 00:12:28.346 8.655 - 8.716: 0.0101% ( 1) 00:12:28.346 8.716 - 8.777: 0.0905% ( 8) 00:12:28.346 8.777 - 8.838: 0.2816% ( 19) 00:12:28.346 8.838 - 8.899: 0.4526% ( 17) 00:12:28.346 8.899 - 8.960: 0.7342% ( 28) 00:12:28.346 8.960 - 9.021: 0.8951% ( 16) 00:12:28.346 9.021 - 9.082: 0.9756% ( 8) 00:12:28.346 9.082 - 9.143: 1.0460% ( 7) 00:12:28.346 9.143 - 9.204: 1.1566% ( 11) 00:12:28.346 9.204 - 9.265: 1.3980% ( 24) 00:12:28.346 9.265 - 9.326: 4.3146% ( 290) 00:12:28.346 9.326 - 9.387: 10.9524% ( 660) 00:12:28.346 9.387 - 9.448: 16.8259% ( 584) 00:12:28.346 9.448 - 9.509: 21.1807% ( 433) 00:12:28.346 9.509 - 9.570: 23.8560% ( 266) 00:12:28.346 9.570 - 9.630: 27.4666% ( 359) 00:12:28.346 9.630 - 9.691: 35.6331% ( 812) 00:12:28.346 9.691 - 9.752: 43.4879% ( 781) 00:12:28.346 9.752 - 9.813: 48.4864% ( 497) 00:12:28.346 9.813 - 9.874: 52.4590% ( 395) 00:12:28.346 9.874 - 9.935: 55.1343% ( 266) 00:12:28.346 9.935 - 9.996: 57.7592% ( 261) 00:12:28.346 9.996 - 10.057: 59.8009% ( 203) 00:12:28.346 10.057 - 10.118: 
61.6313% ( 182) 00:12:28.346 10.118 - 10.179: 63.1801% ( 154) 00:12:28.346 10.179 - 10.240: 64.2261% ( 104) 00:12:28.346 10.240 - 10.301: 65.1715% ( 94) 00:12:28.346 10.301 - 10.362: 65.7548% ( 58) 00:12:28.346 10.362 - 10.423: 66.1772% ( 42) 00:12:28.346 10.423 - 10.484: 66.4689% ( 29) 00:12:28.346 10.484 - 10.545: 66.6901% ( 22) 00:12:28.346 10.545 - 10.606: 66.8611% ( 17) 00:12:28.346 10.606 - 10.667: 67.0723% ( 21) 00:12:28.346 10.667 - 10.728: 67.1930% ( 12) 00:12:28.346 10.728 - 10.789: 67.2835% ( 9) 00:12:28.346 10.789 - 10.850: 67.3841% ( 10) 00:12:28.346 10.850 - 10.910: 67.4444% ( 6) 00:12:28.346 10.910 - 10.971: 67.5148% ( 7) 00:12:28.346 10.971 - 11.032: 67.5752% ( 6) 00:12:28.346 11.032 - 11.093: 67.6255% ( 5) 00:12:28.346 11.093 - 11.154: 67.6556% ( 3) 00:12:28.346 11.154 - 11.215: 67.6858% ( 3) 00:12:28.346 11.215 - 11.276: 67.7562% ( 7) 00:12:28.346 11.276 - 11.337: 67.8166% ( 6) 00:12:28.346 11.337 - 11.398: 67.8568% ( 4) 00:12:28.346 11.398 - 11.459: 67.8668% ( 1) 00:12:28.346 11.459 - 11.520: 67.9171% ( 5) 00:12:28.346 11.520 - 11.581: 67.9674% ( 5) 00:12:28.346 11.642 - 11.703: 67.9976% ( 3) 00:12:28.346 11.703 - 11.764: 68.0378% ( 4) 00:12:28.346 11.764 - 11.825: 68.0479% ( 1) 00:12:28.346 11.825 - 11.886: 68.0881% ( 4) 00:12:28.346 11.886 - 11.947: 68.1183% ( 3) 00:12:28.346 11.947 - 12.008: 68.1283% ( 1) 00:12:28.346 12.008 - 12.069: 68.1384% ( 1) 00:12:28.346 12.130 - 12.190: 68.1585% ( 2) 00:12:28.346 12.190 - 12.251: 68.1887% ( 3) 00:12:28.346 12.251 - 12.312: 68.2088% ( 2) 00:12:28.346 12.312 - 12.373: 68.2390% ( 3) 00:12:28.346 12.434 - 12.495: 68.2591% ( 2) 00:12:28.346 12.556 - 12.617: 68.3496% ( 9) 00:12:28.346 12.617 - 12.678: 68.7821% ( 43) 00:12:28.346 12.678 - 12.739: 69.7878% ( 100) 00:12:28.346 12.739 - 12.800: 71.1154% ( 132) 00:12:28.346 12.800 - 12.861: 72.0406% ( 92) 00:12:28.346 12.861 - 12.922: 72.6541% ( 61) 00:12:28.346 12.922 - 12.983: 73.0967% ( 44) 00:12:28.346 12.983 - 13.044: 73.4084% ( 31) 00:12:28.346 13.044 - 13.105: 74.5248% ( 111) 00:12:28.346 13.105 - 13.166: 77.4816% ( 294) 00:12:28.346 13.166 - 13.227: 79.9759% ( 248) 00:12:28.346 13.227 - 13.288: 82.3494% ( 236) 00:12:28.346 13.288 - 13.349: 84.4514% ( 209) 00:12:28.346 13.349 - 13.410: 85.9600% ( 150) 00:12:28.346 13.410 - 13.470: 87.0562% ( 109) 00:12:28.346 13.470 - 13.531: 87.8306% ( 77) 00:12:28.346 13.531 - 13.592: 88.5246% ( 69) 00:12:28.346 13.592 - 13.653: 89.0174% ( 49) 00:12:28.346 13.653 - 13.714: 89.3593% ( 34) 00:12:28.346 13.714 - 13.775: 89.7616% ( 40) 00:12:28.346 13.775 - 13.836: 90.0634% ( 30) 00:12:28.346 13.836 - 13.897: 90.3047% ( 24) 00:12:28.346 13.897 - 13.958: 90.5361% ( 23) 00:12:28.346 13.958 - 14.019: 90.7372% ( 20) 00:12:28.346 14.019 - 14.080: 90.8378% ( 10) 00:12:28.346 14.080 - 14.141: 90.9786% ( 14) 00:12:28.346 14.141 - 14.202: 91.0792% ( 10) 00:12:28.346 14.202 - 14.263: 91.2099% ( 13) 00:12:28.346 14.263 - 14.324: 91.3608% ( 15) 00:12:28.346 14.324 - 14.385: 91.4915% ( 13) 00:12:28.346 14.385 - 14.446: 91.6122% ( 12) 00:12:28.346 14.446 - 14.507: 91.7228% ( 11) 00:12:28.346 14.507 - 14.568: 91.8033% ( 8) 00:12:28.346 14.568 - 14.629: 91.8636% ( 6) 00:12:28.346 14.629 - 14.690: 91.9240% ( 6) 00:12:28.346 14.690 - 14.750: 92.0547% ( 13) 00:12:28.346 14.750 - 14.811: 92.1251% ( 7) 00:12:28.347 14.811 - 14.872: 92.2156% ( 9) 00:12:28.347 14.872 - 14.933: 92.3363% ( 12) 00:12:28.347 14.933 - 14.994: 92.4168% ( 8) 00:12:28.347 14.994 - 15.055: 92.4671% ( 5) 00:12:28.347 15.055 - 15.116: 92.5475% ( 8) 00:12:28.347 15.116 - 15.177: 92.5777% ( 3) 
00:12:28.347 15.177 - 15.238: 92.6380% ( 6) 00:12:28.347 15.238 - 15.299: 92.6783% ( 4) 00:12:28.347 15.299 - 15.360: 92.7084% ( 3) 00:12:28.347 15.360 - 15.421: 92.7788% ( 7) 00:12:28.347 15.421 - 15.482: 92.8492% ( 7) 00:12:28.347 15.482 - 15.543: 92.9096% ( 6) 00:12:28.347 15.543 - 15.604: 92.9498% ( 4) 00:12:28.347 15.604 - 15.726: 93.1007% ( 15) 00:12:28.347 15.726 - 15.848: 93.3018% ( 20) 00:12:28.347 15.848 - 15.970: 93.6237% ( 32) 00:12:28.347 15.970 - 16.091: 93.9958% ( 37) 00:12:28.347 16.091 - 16.213: 94.4182% ( 42) 00:12:28.347 16.213 - 16.335: 94.7199% ( 30) 00:12:28.347 16.335 - 16.457: 94.9110% ( 19) 00:12:28.347 16.457 - 16.579: 95.1524% ( 24) 00:12:28.347 16.579 - 16.701: 95.4038% ( 25) 00:12:28.347 16.701 - 16.823: 95.6351% ( 23) 00:12:28.347 16.823 - 16.945: 95.8262% ( 19) 00:12:28.347 16.945 - 17.067: 95.9972% ( 17) 00:12:28.347 17.067 - 17.189: 96.1179% ( 12) 00:12:28.347 17.189 - 17.310: 96.2285% ( 11) 00:12:28.347 17.310 - 17.432: 96.3492% ( 12) 00:12:28.347 17.432 - 17.554: 96.5101% ( 16) 00:12:28.347 17.554 - 17.676: 96.6107% ( 10) 00:12:28.347 17.676 - 17.798: 96.7213% ( 11) 00:12:28.347 17.798 - 17.920: 96.8521% ( 13) 00:12:28.347 17.920 - 18.042: 96.9627% ( 11) 00:12:28.347 18.042 - 18.164: 97.0431% ( 8) 00:12:28.347 18.164 - 18.286: 97.1638% ( 12) 00:12:28.347 18.286 - 18.408: 97.2342% ( 7) 00:12:28.347 18.408 - 18.530: 97.2745% ( 4) 00:12:28.347 18.530 - 18.651: 97.3750% ( 10) 00:12:28.347 18.651 - 18.773: 97.4052% ( 3) 00:12:28.347 18.773 - 18.895: 97.5158% ( 11) 00:12:28.347 18.895 - 19.017: 97.5862% ( 7) 00:12:28.347 19.017 - 19.139: 97.6365% ( 5) 00:12:28.347 19.139 - 19.261: 97.7371% ( 10) 00:12:28.347 19.261 - 19.383: 97.7773% ( 4) 00:12:28.347 19.383 - 19.505: 97.8276% ( 5) 00:12:28.347 19.505 - 19.627: 97.8678% ( 4) 00:12:28.347 19.627 - 19.749: 97.8880% ( 2) 00:12:28.347 19.749 - 19.870: 97.9483% ( 6) 00:12:28.347 19.870 - 19.992: 97.9885% ( 4) 00:12:28.347 19.992 - 20.114: 98.0690% ( 8) 00:12:28.347 20.114 - 20.236: 98.0891% ( 2) 00:12:28.347 20.236 - 20.358: 98.1193% ( 3) 00:12:28.347 20.358 - 20.480: 98.1796% ( 6) 00:12:28.347 20.480 - 20.602: 98.2903% ( 11) 00:12:28.347 20.602 - 20.724: 98.4109% ( 12) 00:12:28.347 20.724 - 20.846: 98.4813% ( 7) 00:12:28.347 20.846 - 20.968: 98.6020% ( 12) 00:12:28.347 20.968 - 21.090: 98.6423% ( 4) 00:12:28.347 21.090 - 21.211: 98.6724% ( 3) 00:12:28.347 21.211 - 21.333: 98.7127% ( 4) 00:12:28.347 21.333 - 21.455: 98.7529% ( 4) 00:12:28.347 21.455 - 21.577: 98.7931% ( 4) 00:12:28.347 21.577 - 21.699: 98.8434% ( 5) 00:12:28.347 21.699 - 21.821: 98.8535% ( 1) 00:12:28.347 21.821 - 21.943: 98.8736% ( 2) 00:12:28.347 21.943 - 22.065: 98.9038% ( 3) 00:12:28.347 22.065 - 22.187: 98.9440% ( 4) 00:12:28.347 22.187 - 22.309: 98.9641% ( 2) 00:12:28.347 22.309 - 22.430: 99.0043% ( 4) 00:12:28.347 22.430 - 22.552: 99.0345% ( 3) 00:12:28.347 22.552 - 22.674: 99.0446% ( 1) 00:12:28.347 22.674 - 22.796: 99.0647% ( 2) 00:12:28.347 22.796 - 22.918: 99.0948% ( 3) 00:12:28.347 22.918 - 23.040: 99.1150% ( 2) 00:12:28.347 23.162 - 23.284: 99.1351% ( 2) 00:12:28.347 23.284 - 23.406: 99.1552% ( 2) 00:12:28.347 23.406 - 23.528: 99.1753% ( 2) 00:12:28.347 23.528 - 23.650: 99.1954% ( 2) 00:12:28.347 23.650 - 23.771: 99.2055% ( 1) 00:12:28.347 23.771 - 23.893: 99.2155% ( 1) 00:12:28.347 23.893 - 24.015: 99.2356% ( 2) 00:12:28.347 24.015 - 24.137: 99.2759% ( 4) 00:12:28.347 24.137 - 24.259: 99.2960% ( 2) 00:12:28.347 24.381 - 24.503: 99.3161% ( 2) 00:12:28.347 24.503 - 24.625: 99.3362% ( 2) 00:12:28.347 24.625 - 24.747: 99.3463% ( 1) 
00:12:28.347 24.747 - 24.869: 99.3563% ( 1) 00:12:28.347 24.869 - 24.990: 99.3764% ( 2) 00:12:28.347 25.234 - 25.356: 99.3966% ( 2) 00:12:28.347 25.356 - 25.478: 99.4066% ( 1) 00:12:28.347 25.600 - 25.722: 99.4267% ( 2) 00:12:28.347 25.722 - 25.844: 99.4368% ( 1) 00:12:28.347 25.844 - 25.966: 99.4468% ( 1) 00:12:28.347 25.966 - 26.088: 99.4670% ( 2) 00:12:28.347 26.088 - 26.210: 99.5072% ( 4) 00:12:28.347 26.210 - 26.331: 99.5374% ( 3) 00:12:28.347 26.331 - 26.453: 99.5474% ( 1) 00:12:28.347 26.453 - 26.575: 99.5776% ( 3) 00:12:28.347 26.575 - 26.697: 99.5876% ( 1) 00:12:28.347 26.819 - 26.941: 99.5977% ( 1) 00:12:28.347 27.307 - 27.429: 99.6078% ( 1) 00:12:28.347 27.794 - 27.916: 99.6178% ( 1) 00:12:28.347 28.282 - 28.404: 99.6279% ( 1) 00:12:28.347 28.891 - 29.013: 99.6581% ( 3) 00:12:28.347 29.135 - 29.257: 99.6681% ( 1) 00:12:28.347 29.501 - 29.623: 99.6782% ( 1) 00:12:28.347 30.964 - 31.086: 99.6983% ( 2) 00:12:28.347 31.695 - 31.939: 99.7083% ( 1) 00:12:28.347 31.939 - 32.183: 99.7184% ( 1) 00:12:28.347 32.427 - 32.670: 99.7285% ( 1) 00:12:28.347 32.914 - 33.158: 99.7385% ( 1) 00:12:28.347 33.402 - 33.646: 99.7486% ( 1) 00:12:28.347 33.646 - 33.890: 99.7687% ( 2) 00:12:28.347 34.133 - 34.377: 99.7787% ( 1) 00:12:28.347 35.352 - 35.596: 99.7989% ( 2) 00:12:28.347 36.328 - 36.571: 99.8089% ( 1) 00:12:28.347 36.815 - 37.059: 99.8190% ( 1) 00:12:28.347 37.303 - 37.547: 99.8290% ( 1) 00:12:28.347 38.034 - 38.278: 99.8391% ( 1) 00:12:28.347 38.278 - 38.522: 99.8491% ( 1) 00:12:28.347 39.010 - 39.253: 99.8592% ( 1) 00:12:28.347 39.253 - 39.497: 99.8693% ( 1) 00:12:28.347 39.741 - 39.985: 99.8793% ( 1) 00:12:28.347 40.716 - 40.960: 99.8894% ( 1) 00:12:28.347 40.960 - 41.204: 99.8994% ( 1) 00:12:28.347 46.811 - 47.055: 99.9095% ( 1) 00:12:28.347 48.274 - 48.518: 99.9195% ( 1) 00:12:28.347 48.518 - 48.762: 99.9296% ( 1) 00:12:28.347 49.250 - 49.493: 99.9397% ( 1) 00:12:28.347 49.737 - 49.981: 99.9497% ( 1) 00:12:28.347 50.469 - 50.712: 99.9598% ( 1) 00:12:28.347 56.808 - 57.051: 99.9698% ( 1) 00:12:28.347 59.733 - 59.977: 99.9799% ( 1) 00:12:28.347 96.549 - 97.036: 99.9899% ( 1) 00:12:28.347 133.608 - 134.583: 100.0000% ( 1) 00:12:28.347 00:12:28.347 ************************************ 00:12:28.347 END TEST nvme_overhead 00:12:28.347 ************************************ 00:12:28.347 00:12:28.347 real 0m1.380s 00:12:28.347 user 0m1.131s 00:12:28.347 sys 0m0.185s 00:12:28.347 11:26:33 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:28.347 11:26:33 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:12:28.347 11:26:33 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:12:28.348 11:26:33 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:12:28.348 11:26:33 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:28.348 11:26:33 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:28.348 ************************************ 00:12:28.348 START TEST nvme_arbitration 00:12:28.348 ************************************ 00:12:28.348 11:26:33 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:12:32.554 Initializing NVMe Controllers 00:12:32.554 Attached to 0000:00:10.0 00:12:32.554 Attached to 0000:00:11.0 00:12:32.554 Attached to 0000:00:13.0 00:12:32.554 Attached to 0000:00:12.0 00:12:32.554 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:12:32.554 Associating QEMU NVMe Ctrl (12341 ) 
with lcore 1 00:12:32.554 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:12:32.554 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:12:32.554 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:12:32.554 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:12:32.554 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:12:32.554 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:12:32.554 Initialization complete. Launching workers. 00:12:32.554 Starting thread on core 1 with urgent priority queue 00:12:32.554 Starting thread on core 2 with urgent priority queue 00:12:32.554 Starting thread on core 3 with urgent priority queue 00:12:32.554 Starting thread on core 0 with urgent priority queue 00:12:32.554 QEMU NVMe Ctrl (12340 ) core 0: 490.67 IO/s 203.80 secs/100000 ios 00:12:32.554 QEMU NVMe Ctrl (12342 ) core 0: 490.67 IO/s 203.80 secs/100000 ios 00:12:32.554 QEMU NVMe Ctrl (12341 ) core 1: 448.00 IO/s 223.21 secs/100000 ios 00:12:32.554 QEMU NVMe Ctrl (12342 ) core 1: 448.00 IO/s 223.21 secs/100000 ios 00:12:32.554 QEMU NVMe Ctrl (12343 ) core 2: 512.00 IO/s 195.31 secs/100000 ios 00:12:32.554 QEMU NVMe Ctrl (12342 ) core 3: 490.67 IO/s 203.80 secs/100000 ios 00:12:32.554 ======================================================== 00:12:32.554 00:12:32.554 00:12:32.554 real 0m3.566s 00:12:32.554 user 0m9.590s 00:12:32.554 sys 0m0.197s 00:12:32.554 11:26:37 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:32.554 11:26:37 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:12:32.554 ************************************ 00:12:32.554 END TEST nvme_arbitration 00:12:32.554 ************************************ 00:12:32.554 11:26:37 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:12:32.554 11:26:37 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:32.554 11:26:37 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:32.554 11:26:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:32.554 ************************************ 00:12:32.554 START TEST nvme_single_aen 00:12:32.554 ************************************ 00:12:32.554 11:26:37 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:12:32.554 Asynchronous Event Request test 00:12:32.554 Attached to 0000:00:10.0 00:12:32.554 Attached to 0000:00:11.0 00:12:32.554 Attached to 0000:00:13.0 00:12:32.555 Attached to 0000:00:12.0 00:12:32.555 Reset controller to setup AER completions for this process 00:12:32.555 Registering asynchronous event callbacks... 
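For the arbitration run whose results appear above, the tool requests weighted round robin arbitration at attach time and then opens its queues with an urgent priority class, which is why every worker reports starting "with urgent priority queue" (the AER test output resumes below). A roughly equivalent setup, as a sketch:

```c
#include "spdk/nvme.h"

/* Ask for weighted round robin while attaching; needs controller support. */
static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	opts->arb_mechanism = SPDK_NVME_CC_AMS_WRR;
	return true;	/* attach to every controller found */
}

/* Open an I/O qpair in the urgent priority class. */
static struct spdk_nvme_qpair *
alloc_urgent_qpair(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvme_io_qpair_opts qopts;

	spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &qopts, sizeof(qopts));
	qopts.qprio = SPDK_NVME_QPRIO_URGENT;
	return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &qopts, sizeof(qopts));
}
```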
00:12:32.555 Getting orig temperature thresholds of all controllers 00:12:32.555 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:32.555 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:32.555 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:32.555 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:32.555 Setting all controllers temperature threshold low to trigger AER 00:12:32.555 Waiting for all controllers temperature threshold to be set lower 00:12:32.555 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:32.555 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:12:32.555 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:32.555 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:12:32.555 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:32.555 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:12:32.555 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:32.555 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:12:32.555 Waiting for all controllers to trigger AER and reset threshold 00:12:32.555 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:32.555 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:32.555 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:32.555 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:32.555 Cleaning up... 00:12:32.555 00:12:32.555 real 0m0.303s 00:12:32.555 user 0m0.113s 00:12:32.555 sys 0m0.139s 00:12:32.555 11:26:37 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:32.555 11:26:37 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:12:32.555 ************************************ 00:12:32.555 END TEST nvme_single_aen 00:12:32.555 ************************************ 00:12:32.555 11:26:37 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:12:32.555 11:26:37 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:32.555 11:26:37 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:32.555 11:26:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:32.555 ************************************ 00:12:32.555 START TEST nvme_doorbell_aers 00:12:32.555 ************************************ 00:12:32.555 11:26:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:12:32.555 11:26:37 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:12:32.555 11:26:37 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:12:32.555 11:26:37 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:12:32.555 11:26:37 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:12:32.555 11:26:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:12:32.555 11:26:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:12:32.555 11:26:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:32.555 11:26:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:12:32.555 11:26:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 
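The single-AEN test that just finished follows one pattern throughout: register an asynchronous event handler, then lower the temperature threshold below the current temperature so the controller must raise the event, which is what "Setting all controllers temperature threshold low to trigger AER" refers to. A sketch of that sequence, assuming an attached 'ctrlr':

```c
#include <stdio.h>
#include "spdk/nvme.h"

static void
aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
	/* Fires once the threshold drops below the current temperature. */
	printf("AER completed, cdw0 0x%x\n", cpl->cdw0);
}

static void
set_feature_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
	*(bool *)arg = true;
}

static void
trigger_temperature_aer(struct spdk_nvme_ctrlr *ctrlr)
{
	bool done = false;

	spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
	/* cdw11 carries the threshold in Kelvin; 0 is always exceeded. */
	spdk_nvme_ctrlr_cmd_set_feature(ctrlr, SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD,
					0, 0, NULL, 0, set_feature_done, &done);
	while (!done) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
	/* Keep polling the admin queue; the AER arrives asynchronously. */
}
```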
00:12:32.555 11:26:38 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:12:32.555 11:26:38 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:32.555 11:26:38 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:32.555 11:26:38 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:12:32.813 [2024-11-20 11:26:38.373692] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65135) is not found. Dropping the request. 00:12:42.858 Executing: test_write_invalid_db 00:12:42.858 Waiting for AER completion... 00:12:42.858 Failure: test_write_invalid_db 00:12:42.858 00:12:42.858 Executing: test_invalid_db_write_overflow_sq 00:12:42.858 Waiting for AER completion... 00:12:42.858 Failure: test_invalid_db_write_overflow_sq 00:12:42.858 00:12:42.858 Executing: test_invalid_db_write_overflow_cq 00:12:42.858 Waiting for AER completion... 00:12:42.858 Failure: test_invalid_db_write_overflow_cq 00:12:42.858 00:12:42.858 11:26:48 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:42.858 11:26:48 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:12:42.858 [2024-11-20 11:26:48.424496] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65135) is not found. Dropping the request. 00:12:52.828 Executing: test_write_invalid_db 00:12:52.828 Waiting for AER completion... 00:12:52.828 Failure: test_write_invalid_db 00:12:52.828 00:12:52.828 Executing: test_invalid_db_write_overflow_sq 00:12:52.828 Waiting for AER completion... 00:12:52.828 Failure: test_invalid_db_write_overflow_sq 00:12:52.828 00:12:52.828 Executing: test_invalid_db_write_overflow_cq 00:12:52.828 Waiting for AER completion... 00:12:52.828 Failure: test_invalid_db_write_overflow_cq 00:12:52.828 00:12:52.828 11:26:58 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:52.828 11:26:58 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:12:52.828 [2024-11-20 11:26:58.435089] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65135) is not found. Dropping the request. 00:13:02.798 Executing: test_write_invalid_db 00:13:02.798 Waiting for AER completion... 00:13:02.798 Failure: test_write_invalid_db 00:13:02.798 00:13:02.798 Executing: test_invalid_db_write_overflow_sq 00:13:02.798 Waiting for AER completion... 00:13:02.798 Failure: test_invalid_db_write_overflow_sq 00:13:02.798 00:13:02.798 Executing: test_invalid_db_write_overflow_cq 00:13:02.798 Waiting for AER completion... 
00:13:02.798 Failure: test_invalid_db_write_overflow_cq 00:13:02.798 00:13:02.798 11:27:08 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:13:02.798 11:27:08 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:13:02.798 [2024-11-20 11:27:08.522330] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65135) is not found. Dropping the request. 00:13:12.771 Executing: test_write_invalid_db 00:13:12.771 Waiting for AER completion... 00:13:12.771 Failure: test_write_invalid_db 00:13:12.771 00:13:12.771 Executing: test_invalid_db_write_overflow_sq 00:13:12.771 Waiting for AER completion... 00:13:12.771 Failure: test_invalid_db_write_overflow_sq 00:13:12.771 00:13:12.771 Executing: test_invalid_db_write_overflow_cq 00:13:12.771 Waiting for AER completion... 00:13:12.771 Failure: test_invalid_db_write_overflow_cq 00:13:12.771 00:13:12.771 ************************************ 00:13:12.771 END TEST nvme_doorbell_aers 00:13:12.771 ************************************ 00:13:12.771 00:13:12.771 real 0m40.295s 00:13:12.771 user 0m28.200s 00:13:12.771 sys 0m11.681s 00:13:12.771 11:27:18 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:12.771 11:27:18 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:13:12.771 11:27:18 nvme -- nvme/nvme.sh@97 -- # uname 00:13:12.771 11:27:18 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:13:12.771 11:27:18 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:13:12.771 11:27:18 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:13:12.771 11:27:18 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:12.771 11:27:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:12.771 ************************************ 00:13:12.771 START TEST nvme_multi_aen 00:13:12.771 ************************************ 00:13:12.771 11:27:18 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:13:13.030 [2024-11-20 11:27:18.667890] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65135) is not found. Dropping the request. 00:13:13.030 [2024-11-20 11:27:18.668052] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65135) is not found. Dropping the request. 00:13:13.030 [2024-11-20 11:27:18.668090] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65135) is not found. Dropping the request. 00:13:13.030 [2024-11-20 11:27:18.670829] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65135) is not found. Dropping the request. 00:13:13.030 [2024-11-20 11:27:18.671156] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65135) is not found. Dropping the request. 00:13:13.030 [2024-11-20 11:27:18.671192] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65135) is not found. Dropping the request. 00:13:13.030 [2024-11-20 11:27:18.673363] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65135) is not found. 
Dropping the request. 00:13:13.030 [2024-11-20 11:27:18.673454] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65135) is not found. Dropping the request. 00:13:13.030 [2024-11-20 11:27:18.673495] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65135) is not found. Dropping the request. 00:13:13.030 [2024-11-20 11:27:18.675864] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65135) is not found. Dropping the request. 00:13:13.030 [2024-11-20 11:27:18.675938] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65135) is not found. Dropping the request. 00:13:13.030 [2024-11-20 11:27:18.675964] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65135) is not found. Dropping the request. 00:13:13.030 Child process pid: 65647 00:13:13.288 [Child] Asynchronous Event Request test 00:13:13.288 [Child] Attached to 0000:00:10.0 00:13:13.288 [Child] Attached to 0000:00:11.0 00:13:13.288 [Child] Attached to 0000:00:13.0 00:13:13.288 [Child] Attached to 0000:00:12.0 00:13:13.288 [Child] Registering asynchronous event callbacks... 00:13:13.288 [Child] Getting orig temperature thresholds of all controllers 00:13:13.288 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:13.288 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:13.288 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:13.288 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:13.288 [Child] Waiting for all controllers to trigger AER and reset threshold 00:13:13.288 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:13.288 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:13.288 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:13.288 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:13.288 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:13.288 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:13.288 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:13.288 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:13.288 [Child] Cleaning up... 00:13:13.547 Asynchronous Event Request test 00:13:13.547 Attached to 0000:00:10.0 00:13:13.547 Attached to 0000:00:11.0 00:13:13.547 Attached to 0000:00:13.0 00:13:13.547 Attached to 0000:00:12.0 00:13:13.547 Reset controller to setup AER completions for this process 00:13:13.547 Registering asynchronous event callbacks... 
00:13:13.547 Getting orig temperature thresholds of all controllers 00:13:13.547 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:13.547 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:13.547 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:13.547 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:13.547 Setting all controllers temperature threshold low to trigger AER 00:13:13.547 Waiting for all controllers temperature threshold to be set lower 00:13:13.547 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:13.547 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:13:13.547 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:13.547 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:13:13.547 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:13.547 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:13:13.547 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:13.547 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:13:13.547 Waiting for all controllers to trigger AER and reset threshold 00:13:13.547 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:13.547 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:13.547 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:13.547 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:13.547 Cleaning up... 00:13:13.547 ************************************ 00:13:13.547 END TEST nvme_multi_aen 00:13:13.547 ************************************ 00:13:13.547 00:13:13.547 real 0m0.820s 00:13:13.547 user 0m0.346s 00:13:13.547 sys 0m0.362s 00:13:13.547 11:27:19 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:13.547 11:27:19 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:13:13.547 11:27:19 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:13:13.547 11:27:19 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:13.547 11:27:19 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:13.547 11:27:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:13.547 ************************************ 00:13:13.547 START TEST nvme_startup 00:13:13.547 ************************************ 00:13:13.547 11:27:19 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:13:14.113 Initializing NVMe Controllers 00:13:14.113 Attached to 0000:00:10.0 00:13:14.113 Attached to 0000:00:11.0 00:13:14.113 Attached to 0000:00:13.0 00:13:14.113 Attached to 0000:00:12.0 00:13:14.113 Initialization complete. 00:13:14.113 Time used:299859.781 (us). 
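The "Time used:299859.781 (us)." line is the whole measurement made by nvme_startup: how long probing and attaching all four controllers takes. The equivalent timing can be sketched around spdk_nvme_probe() like this (probe_cb/attach_cb are the usual callbacks, elided here):

```c
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

extern bool probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
		     struct spdk_nvme_ctrlr_opts *opts);
extern void attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
		      struct spdk_nvme_ctrlr *ctrlr,
		      const struct spdk_nvme_ctrlr_opts *opts);

static void
timed_probe(void)
{
	uint64_t start = spdk_get_ticks();

	/* NULL trid: enumerate local PCIe NVMe devices. */
	spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);

	double us = (spdk_get_ticks() - start) * 1000000.0 / spdk_get_ticks_hz();
	printf("Time used:%.3f (us).\n", us);
}
```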
00:13:14.113 ************************************ 00:13:14.113 END TEST nvme_startup 00:13:14.113 ************************************ 00:13:14.113 00:13:14.113 real 0m0.433s 00:13:14.113 user 0m0.167s 00:13:14.113 sys 0m0.213s 00:13:14.113 11:27:19 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:14.113 11:27:19 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:13:14.113 11:27:19 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:13:14.113 11:27:19 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:14.113 11:27:19 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:14.113 11:27:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:14.113 ************************************ 00:13:14.113 START TEST nvme_multi_secondary 00:13:14.113 ************************************ 00:13:14.113 11:27:19 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:13:14.113 11:27:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65713 00:13:14.113 11:27:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:13:14.113 11:27:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65714 00:13:14.113 11:27:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:13:14.113 11:27:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:13:17.397 Initializing NVMe Controllers 00:13:17.397 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:17.397 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:17.397 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:17.397 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:17.397 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:13:17.397 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:13:17.397 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:13:17.397 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:13:17.397 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:13:17.397 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:13:17.397 Initialization complete. Launching workers. 
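nvme_multi_secondary launches three spdk_nvme_perf instances concurrently on cores 0, 1 and 2, and all three pass "-i 0": they join the same SPDK shared-memory group, so the first becomes the DPDK primary process and the others attach as secondaries against the same controllers. The corresponding environment setup, sketched (the application name here is made up):

```c
#include "spdk/env.h"

static int
init_shared_env(void)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = "multi_secondary_sketch";	/* hypothetical app name */
	opts.shm_id = 0;	/* matches -i 0; same id = same process group */
	return spdk_env_init(&opts);
}
```

Whichever process initializes the group first becomes primary; the rest map the existing hugepage memory, which is why the later instances can attach to controllers the first one already owns.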
00:13:17.397 ======================================================== 00:13:17.397 Latency(us) 00:13:17.397 Device Information : IOPS MiB/s Average min max 00:13:17.397 PCIE (0000:00:10.0) NSID 1 from core 1: 4317.34 16.86 3703.98 1548.12 9727.77 00:13:17.397 PCIE (0000:00:11.0) NSID 1 from core 1: 4317.34 16.86 3705.94 1470.84 10045.22 00:13:17.397 PCIE (0000:00:13.0) NSID 1 from core 1: 4317.34 16.86 3706.09 1511.10 9363.44 00:13:17.397 PCIE (0000:00:12.0) NSID 1 from core 1: 4317.34 16.86 3706.18 1531.96 9491.16 00:13:17.397 PCIE (0000:00:12.0) NSID 2 from core 1: 4317.34 16.86 3706.41 1549.04 9475.28 00:13:17.397 PCIE (0000:00:12.0) NSID 3 from core 1: 4317.34 16.86 3706.61 1566.30 9450.43 00:13:17.397 ======================================================== 00:13:17.397 Total : 25904.02 101.19 3705.87 1470.84 10045.22 00:13:17.397 00:13:17.654 Initializing NVMe Controllers 00:13:17.654 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:17.654 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:17.654 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:17.654 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:17.654 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:13:17.654 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:13:17.654 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:13:17.654 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:13:17.654 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:13:17.654 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:13:17.654 Initialization complete. Launching workers. 00:13:17.654 ======================================================== 00:13:17.654 Latency(us) 00:13:17.654 Device Information : IOPS MiB/s Average min max 00:13:17.654 PCIE (0000:00:10.0) NSID 1 from core 2: 1917.65 7.49 8340.56 2123.49 25238.31 00:13:17.654 PCIE (0000:00:11.0) NSID 1 from core 2: 1917.65 7.49 8342.78 1940.48 21394.52 00:13:17.654 PCIE (0000:00:13.0) NSID 1 from core 2: 1917.65 7.49 8342.83 2192.34 21570.86 00:13:17.654 PCIE (0000:00:12.0) NSID 1 from core 2: 1917.65 7.49 8335.63 2223.64 21652.28 00:13:17.654 PCIE (0000:00:12.0) NSID 2 from core 2: 1917.65 7.49 8331.25 2081.63 23986.89 00:13:17.654 PCIE (0000:00:12.0) NSID 3 from core 2: 1917.65 7.49 8342.57 2159.91 24447.76 00:13:17.654 ======================================================== 00:13:17.654 Total : 11505.88 44.94 8339.27 1940.48 25238.31 00:13:17.655 00:13:17.912 11:27:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65713 00:13:19.285 Initializing NVMe Controllers 00:13:19.285 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:19.285 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:19.285 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:19.285 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:19.285 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:13:19.285 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:13:19.285 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:13:19.285 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:13:19.285 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:13:19.285 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:13:19.285 Initialization complete. Launching workers. 
00:13:19.285 ======================================================== 00:13:19.285 Latency(us) 00:13:19.285 Device Information : IOPS MiB/s Average min max 00:13:19.285 PCIE (0000:00:10.0) NSID 1 from core 0: 6478.81 25.31 2467.86 981.76 10184.62 00:13:19.285 PCIE (0000:00:11.0) NSID 1 from core 0: 6479.01 25.31 2469.08 979.50 9257.06 00:13:19.285 PCIE (0000:00:13.0) NSID 1 from core 0: 6479.01 25.31 2469.07 988.78 9437.27 00:13:19.285 PCIE (0000:00:12.0) NSID 1 from core 0: 6479.01 25.31 2469.04 1009.74 10268.00 00:13:19.285 PCIE (0000:00:12.0) NSID 2 from core 0: 6479.01 25.31 2469.04 999.42 10251.06 00:13:19.285 PCIE (0000:00:12.0) NSID 3 from core 0: 6479.01 25.31 2469.01 991.66 9420.33 00:13:19.285 ======================================================== 00:13:19.285 Total : 38873.84 151.85 2468.85 979.50 10268.00 00:13:19.285 00:13:19.285 11:27:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65714 00:13:19.285 11:27:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:13:19.285 11:27:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65779 00:13:19.285 11:27:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65780 00:13:19.285 11:27:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:13:19.286 11:27:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:13:22.563 Initializing NVMe Controllers 00:13:22.563 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:22.563 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:22.563 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:22.563 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:22.563 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:13:22.563 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:13:22.563 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:13:22.563 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:13:22.563 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:13:22.563 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:13:22.563 Initialization complete. Launching workers. 
00:13:22.563 ======================================================== 00:13:22.564 Latency(us) 00:13:22.564 Device Information : IOPS MiB/s Average min max 00:13:22.564 PCIE (0000:00:10.0) NSID 1 from core 0: 5351.12 20.90 2988.33 936.84 9264.29 00:13:22.564 PCIE (0000:00:11.0) NSID 1 from core 0: 5351.12 20.90 2990.06 969.83 10077.74 00:13:22.564 PCIE (0000:00:13.0) NSID 1 from core 0: 5351.12 20.90 2990.09 954.40 10534.46 00:13:22.564 PCIE (0000:00:12.0) NSID 1 from core 0: 5351.12 20.90 2990.32 944.26 11485.85 00:13:22.564 PCIE (0000:00:12.0) NSID 2 from core 0: 5351.12 20.90 2990.29 943.54 11360.12 00:13:22.564 PCIE (0000:00:12.0) NSID 3 from core 0: 5351.12 20.90 2990.39 948.02 11288.91 00:13:22.564 ======================================================== 00:13:22.564 Total : 32106.73 125.42 2989.91 936.84 11485.85 00:13:22.564 00:13:23.129 Initializing NVMe Controllers 00:13:23.129 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:23.129 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:23.129 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:23.129 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:23.129 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:13:23.129 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:13:23.129 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:13:23.129 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:13:23.129 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:13:23.129 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:13:23.129 Initialization complete. Launching workers. 00:13:23.129 ======================================================== 00:13:23.129 Latency(us) 00:13:23.129 Device Information : IOPS MiB/s Average min max 00:13:23.129 PCIE (0000:00:10.0) NSID 1 from core 1: 5023.36 19.62 3183.30 1061.10 9243.85 00:13:23.129 PCIE (0000:00:11.0) NSID 1 from core 1: 5023.36 19.62 3184.47 1086.91 9101.02 00:13:23.129 PCIE (0000:00:13.0) NSID 1 from core 1: 5023.36 19.62 3184.32 1052.12 8685.01 00:13:23.129 PCIE (0000:00:12.0) NSID 1 from core 1: 5023.36 19.62 3184.18 1073.74 8471.54 00:13:23.129 PCIE (0000:00:12.0) NSID 2 from core 1: 5023.36 19.62 3184.03 1043.60 8764.15 00:13:23.129 PCIE (0000:00:12.0) NSID 3 from core 1: 5023.36 19.62 3183.96 982.12 9383.92 00:13:23.129 ======================================================== 00:13:23.129 Total : 30140.18 117.74 3184.04 982.12 9383.92 00:13:23.129 00:13:25.029 Initializing NVMe Controllers 00:13:25.029 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:25.029 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:25.029 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:25.029 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:25.029 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:13:25.029 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:13:25.029 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:13:25.029 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:13:25.029 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:13:25.029 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:13:25.029 Initialization complete. Launching workers. 
00:13:25.029 ======================================================== 00:13:25.029 Latency(us) 00:13:25.029 Device Information : IOPS MiB/s Average min max 00:13:25.029 PCIE (0000:00:10.0) NSID 1 from core 2: 3110.93 12.15 5140.27 1217.81 20433.37 00:13:25.029 PCIE (0000:00:11.0) NSID 1 from core 2: 3110.93 12.15 5142.25 1189.79 20027.08 00:13:25.029 PCIE (0000:00:13.0) NSID 1 from core 2: 3110.93 12.15 5142.36 1254.02 23345.38 00:13:25.029 PCIE (0000:00:12.0) NSID 1 from core 2: 3110.93 12.15 5141.73 1197.21 19063.72 00:13:25.029 PCIE (0000:00:12.0) NSID 2 from core 2: 3110.93 12.15 5141.85 1046.19 18572.75 00:13:25.029 PCIE (0000:00:12.0) NSID 3 from core 2: 3110.93 12.15 5141.69 997.75 18711.32 00:13:25.029 ======================================================== 00:13:25.030 Total : 18665.60 72.91 5141.69 997.75 23345.38 00:13:25.030 00:13:25.030 ************************************ 00:13:25.030 END TEST nvme_multi_secondary 00:13:25.030 ************************************ 00:13:25.030 11:27:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65779 00:13:25.030 11:27:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65780 00:13:25.030 00:13:25.030 real 0m10.991s 00:13:25.030 user 0m18.781s 00:13:25.030 sys 0m1.146s 00:13:25.030 11:27:30 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:25.030 11:27:30 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:13:25.030 11:27:30 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:13:25.030 11:27:30 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:13:25.030 11:27:30 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/64703 ]] 00:13:25.030 11:27:30 nvme -- common/autotest_common.sh@1094 -- # kill 64703 00:13:25.030 11:27:30 nvme -- common/autotest_common.sh@1095 -- # wait 64703 00:13:25.030 [2024-11-20 11:27:30.674359] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65646) is not found. Dropping the request. 00:13:25.030 [2024-11-20 11:27:30.674691] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65646) is not found. Dropping the request. 00:13:25.030 [2024-11-20 11:27:30.674746] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65646) is not found. Dropping the request. 00:13:25.030 [2024-11-20 11:27:30.674781] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65646) is not found. Dropping the request. 00:13:25.030 [2024-11-20 11:27:30.677348] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65646) is not found. Dropping the request. 00:13:25.030 [2024-11-20 11:27:30.677456] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65646) is not found. Dropping the request. 00:13:25.030 [2024-11-20 11:27:30.677507] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65646) is not found. Dropping the request. 00:13:25.030 [2024-11-20 11:27:30.677537] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65646) is not found. Dropping the request. 00:13:25.030 [2024-11-20 11:27:30.680090] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65646) is not found. Dropping the request. 
00:13:25.030 [2024-11-20 11:27:30.680164] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65646) is not found. Dropping the request. 00:13:25.030 [2024-11-20 11:27:30.680190] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65646) is not found. Dropping the request. 00:13:25.030 [2024-11-20 11:27:30.680218] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65646) is not found. Dropping the request. 00:13:25.030 [2024-11-20 11:27:30.682780] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65646) is not found. Dropping the request. 00:13:25.030 [2024-11-20 11:27:30.682866] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65646) is not found. Dropping the request. 00:13:25.030 [2024-11-20 11:27:30.682898] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65646) is not found. Dropping the request. 00:13:25.030 [2024-11-20 11:27:30.682931] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65646) is not found. Dropping the request. 00:13:25.289 [2024-11-20 11:27:30.893017] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:13:25.289 11:27:30 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:13:25.289 11:27:30 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:13:25.289 11:27:30 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:13:25.289 11:27:30 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:25.289 11:27:30 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:25.289 11:27:30 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:25.289 ************************************ 00:13:25.289 START TEST bdev_nvme_reset_stuck_adm_cmd 00:13:25.289 ************************************ 00:13:25.289 11:27:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:13:25.289 * Looking for test storage... 
00:13:25.289 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:25.289 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:25.289 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:13:25.289 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:25.548 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:25.548 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:25.548 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:25.548 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:25.548 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:13:25.548 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:13:25.548 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:13:25.548 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:13:25.548 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:13:25.548 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:13:25.548 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:13:25.548 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:25.548 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:13:25.548 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:13:25.548 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:25.548 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:13:25.548 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:13:25.548 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:13:25.548 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:25.548 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:13:25.548 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:13:25.548 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:13:25.548 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:13:25.548 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:25.548 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:13:25.548 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:13:25.548 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:25.548 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:25.548 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:13:25.548 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:25.548 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:25.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.548 --rc genhtml_branch_coverage=1 00:13:25.548 --rc genhtml_function_coverage=1 00:13:25.548 --rc genhtml_legend=1 00:13:25.548 --rc geninfo_all_blocks=1 00:13:25.549 --rc geninfo_unexecuted_blocks=1 00:13:25.549 00:13:25.549 ' 00:13:25.549 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:25.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.549 --rc genhtml_branch_coverage=1 00:13:25.549 --rc genhtml_function_coverage=1 00:13:25.549 --rc genhtml_legend=1 00:13:25.549 --rc geninfo_all_blocks=1 00:13:25.549 --rc geninfo_unexecuted_blocks=1 00:13:25.549 00:13:25.549 ' 00:13:25.549 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:25.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.549 --rc genhtml_branch_coverage=1 00:13:25.549 --rc genhtml_function_coverage=1 00:13:25.549 --rc genhtml_legend=1 00:13:25.549 --rc geninfo_all_blocks=1 00:13:25.549 --rc geninfo_unexecuted_blocks=1 00:13:25.549 00:13:25.549 ' 00:13:25.549 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:25.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.549 --rc genhtml_branch_coverage=1 00:13:25.549 --rc genhtml_function_coverage=1 00:13:25.549 --rc genhtml_legend=1 00:13:25.549 --rc geninfo_all_blocks=1 00:13:25.549 --rc geninfo_unexecuted_blocks=1 00:13:25.549 00:13:25.549 '
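The cmp_versions walk just traced is a plain field-wise comparison: split both version strings on '.', '-' and ':', then compare numerically position by position, treating missing fields as zero. A reduced sketch of the idea in the same shell; the real helper in scripts/common.sh additionally routes every field through decimal() to sanitize non-numeric parts:

  lt() {
      local -a ver1 ver2
      IFS='.-:' read -ra ver1 <<< "$1"
      IFS='.-:' read -ra ver2 <<< "$2"
      local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # first strictly smaller field decides
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      done
      return 1   # equal versions are not "less than"
  }
  lt 1.15 2 && echo 'lcov 1.15 predates 2.x'   # matches the return 0 traced above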
00:13:25.549 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:13:25.549 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:13:25.549 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:13:25.549 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:13:25.549 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:13:25.549 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:13:25.549 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:13:25.549 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:13:25.549 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:13:25.549 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:13:25.549 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:13:25.549 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:13:25.549 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:25.549 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:25.549 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:13:25.549 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:13:25.549 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:25.549 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:13:25.549 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:13:25.549 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']'
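get_first_nvme_bdf, traced above, discovers controllers by asking gen_nvme.sh for a JSON bdev config and extracting every PCI traddr with jq. A standalone sketch of that helper, assuming it runs against the SPDK checkout path this job uses:

  #!/usr/bin/env bash
  set -euo pipefail
  rootdir=/home/vagrant/spdk_repo/spdk   # checkout location taken from this log
  # Same pipeline as the common/autotest_common.sh@1499 trace above.
  mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
  (( ${#bdfs[@]} > 0 )) || { echo 'no NVMe controllers found' >&2; exit 1; }
  printf 'first bdf: %s\n' "${bdfs[0]}"   # -> 0000:00:10.0 on this machine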
00:13:25.549 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65947 00:13:25.549 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:13:25.549 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:25.549 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65947 00:13:25.549 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 65947 ']' 00:13:25.549 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.549 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:25.549 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:25.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:25.549 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:25.549 11:27:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:13:25.807 [2024-11-20 11:27:31.347868] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:13:25.807 [2024-11-20 11:27:31.348603] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65947 ] 00:13:26.065 [2024-11-20 11:27:31.583396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:26.065 [2024-11-20 11:27:31.766967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:26.065 [2024-11-20 11:27:31.767050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:26.065 [2024-11-20 11:27:31.767120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.065 [2024-11-20 11:27:31.767128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:27.457 11:27:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:27.457 11:27:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:13:27.457 11:27:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:13:27.457 11:27:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.457 11:27:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:13:27.457 nvme0n1 00:13:27.457 11:27:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.458 11:27:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:13:27.458 11:27:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_8mLsX.txt 00:13:27.458 11:27:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:13:27.458 11:27:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.458 11:27:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:13:27.458 true 00:13:27.458 11:27:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.458 11:27:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:13:27.458 11:27:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732102052 00:13:27.458 11:27:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65976 00:13:27.458 11:27:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:27.458 11:27:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
00:13:27.458 11:27:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:13:29.381 11:27:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:13:29.381 11:27:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.381 11:27:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:13:29.381 [2024-11-20 11:27:34.939604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:13:29.381 [2024-11-20 11:27:34.940022] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:13:29.381 [2024-11-20 11:27:34.940057] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:29.381 [2024-11-20 11:27:34.940078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:29.381 [2024-11-20 11:27:34.942087] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:13:29.381 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65976 00:13:29.381 11:27:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.381 11:27:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65976 00:13:29.381 11:27:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65976 00:13:29.381 11:27:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:13:29.381 11:27:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:13:29.381 11:27:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:29.381 11:27:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.381 11:27:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:13:29.381 11:27:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.381 11:27:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:13:29.381 11:27:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_8mLsX.txt 00:13:29.381 11:27:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:13:29.381 11:27:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:13:29.381 11:27:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:13:29.381 11:27:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:13:29.381 11:27:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:13:29.381 11:27:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63
00:13:29.381 11:27:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:13:29.381 11:27:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:13:29.381 11:27:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:13:29.381 11:27:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:13:29.381 11:27:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:13:29.381 11:27:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:13:29.381 11:27:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:13:29.381 11:27:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:13:29.381 11:27:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:13:29.381 11:27:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:13:29.381 11:27:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:13:29.381 11:27:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:13:29.381 11:27:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:13:29.381 11:27:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_8mLsX.txt 00:13:29.381 11:27:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65947 00:13:29.381 11:27:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 65947 ']' 00:13:29.381 11:27:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 65947 00:13:29.381 11:27:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:13:29.381 11:27:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:29.381 11:27:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65947 00:13:29.381 killing process with pid 65947 00:13:29.381 11:27:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:29.381 11:27:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:29.381 11:27:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65947' 00:13:29.381 11:27:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 65947 00:13:29.381 11:27:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 65947 00:13:32.655 11:27:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:13:32.655 11:27:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:13:32.655 00:13:32.655 real 0m7.119s 00:13:32.655 user 0m25.016s 00:13:32.655 sys 0m0.824s 00:13:32.655 11:27:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:32.655 ************************************ 00:13:32.655 END TEST bdev_nvme_reset_stuck_adm_cmd 00:13:32.655 ************************************ 00:13:32.655 11:27:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
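The two base64_decode_bits calls above unpack the completion that bdev_nvme_send_cmd captured: the NVMe status word sits in the top half of completion dword 3, with SC at bits 8:1 and SCT at bits 11:9, which is why the shifts are 1 and 9. A close reimplementation of the helper as the xtrace shows it (the original lives in nvme_reset_stuck_adm_cmd.sh; negative array subscripts need bash 4.3+):

  base64_decode_bits() {
      local b64=$1 off=$2 mask=$3
      local -a bytes
      # Decode the 16-byte completion and emit one 0xNN token per byte.
      mapfile -t bytes < <(base64 -d <(printf '%s' "$b64") | hexdump -ve '/1 "0x%02x\n"')
      # The status word (including the phase bit) is the last two bytes, little endian.
      local status=$(( (bytes[-1] << 8) | bytes[-2] ))
      printf '0x%x\n' $(( (status >> off) & mask ))
  }
  base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255   # SC  -> 0x1, matching the injected --sc 1
  base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3     # SCT -> 0x0, matching the injected --sct 0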
00:13:32.655 11:27:38 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:13:32.655 11:27:38 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:13:32.655 11:27:38 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:32.655 11:27:38 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:32.655 11:27:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:32.655 ************************************ 00:13:32.655 START TEST nvme_fio 00:13:32.655 ************************************ 00:13:32.655 11:27:38 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:13:32.655 11:27:38 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:13:32.655 11:27:38 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:13:32.655 11:27:38 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:13:32.655 11:27:38 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:13:32.655 11:27:38 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:13:32.655 11:27:38 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:32.655 11:27:38 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:13:32.655 11:27:38 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:32.655 11:27:38 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:13:32.655 11:27:38 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:32.655 11:27:38 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:13:32.655 11:27:38 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:13:32.655 11:27:38 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:32.655 11:27:38 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:13:32.655 11:27:38 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:32.914 11:27:38 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:13:32.914 11:27:38 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:33.173 11:27:38 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:13:33.173 11:27:38 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:13:33.173 11:27:38 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:13:33.173 11:27:38 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:33.173 11:27:38 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:33.173 11:27:38 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers
00:13:33.173 11:27:38 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:33.173 11:27:38 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:13:33.173 11:27:38 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:33.173 11:27:38 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:33.173 11:27:38 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:13:33.173 11:27:38 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:33.173 11:27:38 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:33.173 11:27:38 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:33.173 11:27:38 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:33.173 11:27:38 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:13:33.173 11:27:38 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:33.173 11:27:38 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:13:33.431 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:33.431 fio-3.35 00:13:33.431 Starting 1 thread 00:13:36.767 00:13:36.767 test: (groupid=0, jobs=1): err= 0: pid=66132: Wed Nov 20 11:27:42 2024 00:13:36.767 read: IOPS=15.2k, BW=59.5MiB/s (62.4MB/s)(119MiB/2001msec) 00:13:36.767 slat (nsec): min=4687, max=89137, avg=6609.82, stdev=2502.08 00:13:36.767 clat (usec): min=341, max=12598, avg=4176.09, stdev=964.62 00:13:36.767 lat (usec): min=347, max=12687, avg=4182.70, stdev=965.81 00:13:36.767 clat percentiles (usec): 00:13:36.767 | 1.00th=[ 2835], 5.00th=[ 3130], 10.00th=[ 3261], 20.00th=[ 3392], 00:13:36.767 | 30.00th=[ 3523], 40.00th=[ 3687], 50.00th=[ 3884], 60.00th=[ 4178], 00:13:36.767 | 70.00th=[ 4490], 80.00th=[ 4948], 90.00th=[ 5473], 95.00th=[ 6063], 00:13:36.767 | 99.00th=[ 7242], 99.50th=[ 7832], 99.90th=[ 8979], 99.95th=[10945], 00:13:36.767 | 99.99th=[12387] 00:13:36.767 bw ( KiB/s): min=53016, max=68720, per=100.00%, avg=62314.67, stdev=8242.11, samples=3 00:13:36.767 iops : min=13254, max=17180, avg=15578.67, stdev=2060.53, samples=3 00:13:36.767 write: IOPS=15.3k, BW=59.6MiB/s (62.5MB/s)(119MiB/2001msec); 0 zone resets 00:13:36.767 slat (nsec): min=4789, max=73850, avg=6932.05, stdev=2514.05 00:13:36.767 clat (usec): min=262, max=12410, avg=4186.60, stdev=971.54 00:13:36.767 lat (usec): min=269, max=12427, avg=4193.53, stdev=972.68 00:13:36.767 clat percentiles (usec): 00:13:36.767 | 1.00th=[ 2868], 5.00th=[ 3130], 10.00th=[ 3261], 20.00th=[ 3425], 00:13:36.767 | 30.00th=[ 3556], 40.00th=[ 3687], 50.00th=[ 3916], 60.00th=[ 4178], 00:13:36.767 | 70.00th=[ 4555], 80.00th=[ 4948], 90.00th=[ 5473], 95.00th=[ 6063], 00:13:36.767 | 99.00th=[ 7242], 99.50th=[ 7898], 99.90th=[ 9503], 99.95th=[11076], 00:13:36.767 | 99.99th=[12125] 00:13:36.767 bw ( KiB/s): min=53296, max=67800, per=100.00%, avg=61856.00, stdev=7597.64, samples=3 00:13:36.767 iops : min=13324, max=16950, avg=15464.00, stdev=1899.41, samples=3 00:13:36.767 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:13:36.767 lat (msec) : 2=0.06%, 4=53.92%, 10=45.90%, 20=0.08% 00:13:36.767 cpu : usr=98.35%, sys=0.40%, ctx=5, majf=0, minf=607
00:13:36.767 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:36.767 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:36.767 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:36.767 issued rwts: total=30488,30540,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:36.767 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:36.767 00:13:36.767 Run status group 0 (all jobs): 00:13:36.767 READ: bw=59.5MiB/s (62.4MB/s), 59.5MiB/s-59.5MiB/s (62.4MB/s-62.4MB/s), io=119MiB (125MB), run=2001-2001msec 00:13:36.767 WRITE: bw=59.6MiB/s (62.5MB/s), 59.6MiB/s-59.6MiB/s (62.5MB/s-62.5MB/s), io=119MiB (125MB), run=2001-2001msec 00:13:36.767 ----------------------------------------------------- 00:13:36.767 Suppressions used: 00:13:36.767 count bytes template 00:13:36.767 1 32 /usr/src/fio/parse.c 00:13:36.767 1 8 libtcmalloc_minimal.so 00:13:36.767 ----------------------------------------------------- 00:13:36.767
00:13:36.767 11:27:42 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:36.768 11:27:42 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:36.768 11:27:42 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:13:36.768 11:27:42 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:37.048 11:27:42 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:13:37.048 11:27:42 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:37.614 11:27:43 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:13:37.614 11:27:43 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:13:37.614 11:27:43 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:13:37.614 11:27:43 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:37.614 11:27:43 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:37.614 11:27:43 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:37.614 11:27:43 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:37.614 11:27:43 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:13:37.614 11:27:43 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:37.614 11:27:43 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:37.614 11:27:43 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:37.614 11:27:43 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:13:37.614 11:27:43 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:37.614 11:27:43 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:37.614 11:27:43 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:37.614 11:27:43 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break
00:13:37.614 11:27:43 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:37.614 11:27:43 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:13:37.614 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:37.614 fio-3.35 00:13:37.614 Starting 1 thread 00:13:40.898 00:13:40.898 test: (groupid=0, jobs=1): err= 0: pid=66200: Wed Nov 20 11:27:46 2024 00:13:40.898 read: IOPS=13.8k, BW=53.8MiB/s (56.4MB/s)(108MiB/2001msec) 00:13:40.898 slat (nsec): min=4585, max=98013, avg=8005.92, stdev=4682.26 00:13:40.898 clat (usec): min=443, max=11924, avg=4626.95, stdev=1180.79 00:13:40.898 lat (usec): min=450, max=11941, avg=4634.96, stdev=1184.12 00:13:40.898 clat percentiles (usec): 00:13:40.898 | 1.00th=[ 2704], 5.00th=[ 3195], 10.00th=[ 3458], 20.00th=[ 3884], 00:13:40.898 | 30.00th=[ 4178], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4490], 00:13:40.898 | 70.00th=[ 4621], 80.00th=[ 5014], 90.00th=[ 5997], 95.00th=[ 7570], 00:13:40.898 | 99.00th=[ 8586], 99.50th=[ 8586], 99.90th=[ 9110], 99.95th=[10028], 00:13:40.898 | 99.99th=[11469] 00:13:40.898 bw ( KiB/s): min=52496, max=62056, per=100.00%, avg=58021.33, stdev=4951.26, samples=3 00:13:40.898 iops : min=13124, max=15514, avg=14505.33, stdev=1237.81, samples=3 00:13:40.898 write: IOPS=13.8k, BW=53.8MiB/s (56.4MB/s)(108MiB/2001msec); 0 zone resets 00:13:40.898 slat (usec): min=4, max=898, avg= 8.38, stdev= 8.43 00:13:40.898 clat (usec): min=314, max=11695, avg=4635.11, stdev=1182.60 00:13:40.898 lat (usec): min=321, max=11728, avg=4643.49, stdev=1186.09 00:13:40.898 clat percentiles (usec): 00:13:40.898 | 1.00th=[ 2769], 5.00th=[ 3228], 10.00th=[ 3490], 20.00th=[ 3916], 00:13:40.898 | 30.00th=[ 4178], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4490], 00:13:40.898 | 70.00th=[ 4621], 80.00th=[ 4948], 90.00th=[ 5997], 95.00th=[ 7635], 00:13:40.898 | 99.00th=[ 8586], 99.50th=[ 8586], 99.90th=[ 9110], 99.95th=[10421], 00:13:40.898 | 99.99th=[11207] 00:13:40.898 bw ( KiB/s): min=52360, max=62544, per=100.00%, avg=57917.33, stdev=5155.39, samples=3 00:13:40.898 iops : min=13090, max=15636, avg=14479.33, stdev=1288.85, samples=3 00:13:40.898 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:13:40.898 lat (msec) : 2=0.24%, 4=21.54%, 10=78.13%, 20=0.06% 00:13:40.898 cpu : usr=98.55%, sys=0.35%, ctx=3, majf=0, minf=608 00:13:40.898 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:40.898 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:40.898 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:40.898 issued rwts: total=27572,27537,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:40.898 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:40.898 00:13:40.898 Run status group 0 (all jobs): 00:13:40.898 READ: bw=53.8MiB/s (56.4MB/s), 53.8MiB/s-53.8MiB/s (56.4MB/s-56.4MB/s), io=108MiB (113MB), run=2001-2001msec 00:13:40.898 WRITE: bw=53.8MiB/s (56.4MB/s), 53.8MiB/s-53.8MiB/s (56.4MB/s-56.4MB/s), io=108MiB (113MB), run=2001-2001msec 00:13:40.898 ----------------------------------------------------- 00:13:40.898 Suppressions used: 00:13:40.898 count bytes template 00:13:40.898 1 32 /usr/src/fio/parse.c 00:13:40.898 1 8 libtcmalloc_minimal.so 00:13:40.898 ----------------------------------------------------- 00:13:40.898
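Every per-controller fio run in this test repeats the launcher trick traced above: fio itself is not sanitizer-instrumented, but the SPDK ioengine is, so the script reads the libasan the plugin links against out of ldd and preloads it, letting the ASAN runtime initialize before fio dlopen()s the engine. A condensed sketch using the paths this run logged (note the traddr uses dots rather than colons, since fio splits filename lists on ':'):

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
  config=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # /usr/lib64/libasan.so.8 on this host
  LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" \
      /usr/src/fio/fio "$config" '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096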
00:13:40.898 11:27:46 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:40.898 11:27:46 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:40.898 11:27:46 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:13:40.898 11:27:46 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:41.156 11:27:46 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:13:41.156 11:27:46 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:41.414 11:27:47 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:13:41.414 11:27:47 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:13:41.414 11:27:47 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:13:41.414 11:27:47 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:41.414 11:27:47 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:41.414 11:27:47 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:41.414 11:27:47 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:41.414 11:27:47 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:13:41.414 11:27:47 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:41.414 11:27:47 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:41.414 11:27:47 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:41.414 11:27:47 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:13:41.414 11:27:47 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:41.414 11:27:47 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:41.414 11:27:47 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:41.414 11:27:47 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:13:41.414 11:27:47 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:41.414 11:27:47 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:13:41.672 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:41.672 fio-3.35 00:13:41.672 Starting 1 thread
00:13:45.858 00:13:45.858 test: (groupid=0, jobs=1): err= 0: pid=66262: Wed Nov 20 11:27:50 2024 00:13:45.858 read: IOPS=16.9k, BW=66.2MiB/s (69.4MB/s)(132MiB/2001msec) 00:13:45.858 slat (usec): min=4, max=441, avg= 5.95, stdev= 4.33 00:13:45.858 clat (usec): min=280, max=10355, avg=3763.22, stdev=681.84 00:13:45.858 lat (usec): min=296, max=10394, avg=3769.18, stdev=682.54 00:13:45.858 clat percentiles (usec): 00:13:45.858 | 1.00th=[ 2343], 5.00th=[ 2933], 10.00th=[ 3130], 20.00th=[ 3261], 00:13:45.858 | 30.00th=[ 3359], 40.00th=[ 3490], 50.00th=[ 3621], 60.00th=[ 3752], 00:13:45.858 | 70.00th=[ 4015], 80.00th=[ 4293], 90.00th=[ 4686], 95.00th=[ 5014], 00:13:45.858 | 99.00th=[ 5538], 99.50th=[ 5800], 99.90th=[ 7177], 99.95th=[ 8586], 00:13:45.858 | 99.99th=[10159] 00:13:45.858 bw ( KiB/s): min=65248, max=70040, per=99.32%, avg=67285.33, stdev=2475.23, samples=3 00:13:45.858 iops : min=16312, max=17510, avg=16821.33, stdev=618.81, samples=3 00:13:45.858 write: IOPS=17.0k, BW=66.3MiB/s (69.6MB/s)(133MiB/2001msec); 0 zone resets 00:13:45.858 slat (usec): min=4, max=742, avg= 6.39, stdev= 6.33 00:13:45.858 clat (usec): min=223, max=10161, avg=3755.76, stdev=676.95 00:13:45.858 lat (usec): min=229, max=10176, avg=3762.15, stdev=677.67 00:13:45.858 clat percentiles (usec): 00:13:45.858 | 1.00th=[ 2343], 5.00th=[ 2933], 10.00th=[ 3130], 20.00th=[ 3294], 00:13:45.858 | 30.00th=[ 3359], 40.00th=[ 3490], 50.00th=[ 3589], 60.00th=[ 3752], 00:13:45.858 | 70.00th=[ 3982], 80.00th=[ 4293], 90.00th=[ 4686], 95.00th=[ 5014], 00:13:45.858 | 99.00th=[ 5538], 99.50th=[ 5800], 99.90th=[ 7242], 99.95th=[ 8848], 00:13:45.858 | 99.99th=[ 9896] 00:13:45.858 bw ( KiB/s): min=65120, max=69736, per=98.94%, avg=67216.00, stdev=2337.03, samples=3 00:13:45.858 iops : min=16280, max=17434, avg=16804.00, stdev=584.26, samples=3 00:13:45.858 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:13:45.858 lat (msec) : 2=0.36%, 4=69.75%, 10=29.84%, 20=0.01% 00:13:45.858 cpu : usr=98.15%, sys=0.35%, ctx=16, majf=0, minf=607 00:13:45.858 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:45.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:45.858 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:45.858 issued rwts: total=33889,33984,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:45.858 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:45.858 00:13:45.858 Run status group 0 (all jobs): 00:13:45.858 READ: bw=66.2MiB/s (69.4MB/s), 66.2MiB/s-66.2MiB/s (69.4MB/s-69.4MB/s), io=132MiB (139MB), run=2001-2001msec 00:13:45.858 WRITE: bw=66.3MiB/s (69.6MB/s), 66.3MiB/s-66.3MiB/s (69.6MB/s-69.6MB/s), io=133MiB (139MB), run=2001-2001msec 00:13:45.858 ----------------------------------------------------- 00:13:45.858 Suppressions used: 00:13:45.858 count bytes template 00:13:45.858 1 32 /usr/src/fio/parse.c 00:13:45.858 1 8 libtcmalloc_minimal.so 00:13:45.858 ----------------------------------------------------- 00:13:45.858
00:13:45.858 11:27:50 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:45.858 11:27:50 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:45.858 11:27:50 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:45.858 11:27:50 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:13:45.858 11:27:51 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:13:45.858 11:27:51 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:45.858 11:27:51 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:13:45.858 11:27:51 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:13:45.858 11:27:51 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:13:45.858 11:27:51 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:45.858 11:27:51 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:45.858 11:27:51 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:45.858 11:27:51 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:45.858 11:27:51 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:13:45.858 11:27:51 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:45.858 11:27:51 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:45.858 11:27:51 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:13:45.858 11:27:51 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:45.858 11:27:51 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:46.118 11:27:51 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:46.118 11:27:51 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:46.118 11:27:51 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:13:46.118 11:27:51 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:46.118 11:27:51 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:13:46.118 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:46.118 fio-3.35 00:13:46.118 Starting 1 thread 00:13:51.392 00:13:51.392 test: (groupid=0, jobs=1): err= 0: pid=66328: Wed Nov 20 11:27:56 2024 00:13:51.392 read: IOPS=17.7k, BW=69.1MiB/s (72.4MB/s)(138MiB/2001msec) 00:13:51.392 slat (nsec): min=4561, max=63557, avg=5810.33, stdev=1768.22 00:13:51.392 clat (usec): min=326, max=9032, avg=3605.66, stdev=638.76 00:13:51.392 lat (usec): min=332, max=9069, avg=3611.47, stdev=639.55 00:13:51.392 clat percentiles (usec): 00:13:51.392 | 1.00th=[ 2737], 5.00th=[ 3130], 10.00th=[ 3195], 20.00th=[ 3228], 00:13:51.392 | 30.00th=[ 3294], 40.00th=[ 3326], 50.00th=[ 3359], 60.00th=[ 3425], 00:13:51.392 | 70.00th=[ 3523], 80.00th=[ 4178], 90.00th=[ 4424], 95.00th=[ 4555], 00:13:51.392 | 99.00th=[ 6259], 99.50th=[ 6980], 99.90th=[ 8225], 99.95th=[ 8356], 00:13:51.392 | 99.99th=[ 8979] 00:13:51.392 bw ( KiB/s): min=61400, max=76760, per=97.69%, avg=69104.00, stdev=7680.11, samples=3 00:13:51.392 iops : min=15352, max=19190, avg=17276.67, stdev=1919.03, samples=3 00:13:51.392 write: IOPS=17.7k, BW=69.1MiB/s (72.4MB/s)(138MiB/2001msec); 0 zone resets 00:13:51.392 slat (nsec): min=4721, max=63314, avg=6113.59, stdev=1804.97 00:13:51.392 clat (usec): min=353, max=8961, avg=3607.69, stdev=645.51 00:13:51.392 lat (usec): min=360, max=8974, avg=3613.80, stdev=646.29 00:13:51.392 clat percentiles (usec): 00:13:51.392 | 1.00th=[ 2737], 5.00th=[ 3130], 10.00th=[ 3195], 20.00th=[ 3228], 00:13:51.392 | 30.00th=[ 3294], 40.00th=[ 3326], 50.00th=[ 3359], 60.00th=[ 3425], 00:13:51.392 | 70.00th=[ 3523], 80.00th=[ 4178], 90.00th=[ 4424], 95.00th=[ 4555],
00:13:51.392 | 99.00th=[ 6325], 99.50th=[ 7242], 99.90th=[ 8160], 99.95th=[ 8356], 00:13:51.392 | 99.99th=[ 8848] 00:13:51.392 bw ( KiB/s): min=61768, max=76400, per=97.55%, avg=68997.33, stdev=7317.54, samples=3 00:13:51.392 iops : min=15442, max=19100, avg=17249.33, stdev=1829.38, samples=3 00:13:51.392 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:13:51.392 lat (msec) : 2=0.17%, 4=75.65%, 10=24.16% 00:13:51.392 cpu : usr=99.15%, sys=0.05%, ctx=8, majf=0, minf=605 00:13:51.392 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:51.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:51.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:51.392 issued rwts: total=35388,35382,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:51.392 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:51.392 00:13:51.392 Run status group 0 (all jobs): 00:13:51.392 READ: bw=69.1MiB/s (72.4MB/s), 69.1MiB/s-69.1MiB/s (72.4MB/s-72.4MB/s), io=138MiB (145MB), run=2001-2001msec 00:13:51.392 WRITE: bw=69.1MiB/s (72.4MB/s), 69.1MiB/s-69.1MiB/s (72.4MB/s-72.4MB/s), io=138MiB (145MB), run=2001-2001msec 00:13:51.392 ----------------------------------------------------- 00:13:51.392 Suppressions used: 00:13:51.392 count bytes template 00:13:51.392 1 32 /usr/src/fio/parse.c 00:13:51.392 1 8 libtcmalloc_minimal.so 00:13:51.392 ----------------------------------------------------- 00:13:51.392 00:13:51.392 11:27:56 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:51.392 11:27:56 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:13:51.392 00:13:51.392 real 0m18.482s 00:13:51.392 user 0m14.025s 00:13:51.392 sys 0m4.087s 00:13:51.392 11:27:56 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:51.392 ************************************ 00:13:51.392 END TEST nvme_fio 00:13:51.392 ************************************ 00:13:51.392 11:27:56 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:13:51.392 ************************************ 00:13:51.392 END TEST nvme 00:13:51.392 ************************************ 00:13:51.392 00:13:51.392 real 1m35.643s 00:13:51.392 user 3m48.277s 00:13:51.392 sys 0m23.986s 00:13:51.392 11:27:56 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:51.392 11:27:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:51.392 11:27:56 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:13:51.392 11:27:56 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:13:51.392 11:27:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:51.392 11:27:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:51.392 11:27:56 -- common/autotest_common.sh@10 -- # set +x 00:13:51.392 ************************************ 00:13:51.392 START TEST nvme_scc 00:13:51.392 ************************************ 00:13:51.392 11:27:56 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:13:51.392 * Looking for test storage... 
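Each fio pass above is a 50/50 randrw job whose read and write legs land within a fraction of a percent of each other, and the numbers are internally consistent: at bs=4096, 17.7k IOPS × 4096 B ≈ 72.5 MB/s, matching the reported 69.1 MiB/s (72.4 MB/s). Before each pass the harness greps spdk_nvme_identify output for 'Extended Data LBA' and, absent extended LBA formats, pins --bs=4096; fio_plugin() then resolves the ASAN runtime from the plugin's ldd output and preloads it ahead of the SPDK ioengine. A minimal sketch of that preload pattern, with paths taken from the trace above (error handling and the libclang_rt fallback are simplified; this is not the harness's actual function, just the pattern it walks through):

#!/usr/bin/env bash
# Sketch of the fio_plugin() preload dance traced above.
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
job=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio
asan_lib=
for sanitizer in libasan libclang_rt.asan; do
    # ldd prints "libasan.so.8 => /usr/lib64/libasan.so.8 (0x...)";
    # field 3 is the resolved library path.
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n $asan_lib ]] && break
done
# The sanitizer runtime must come first in LD_PRELOAD, otherwise ASAN
# aborts because it is not first in the initial library list.
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$job" \
    '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096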
00:13:51.392 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:51.392 11:27:56 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:51.392 11:27:56 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:13:51.392 11:27:56 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:51.392 11:27:56 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:51.392 11:27:56 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:51.393 11:27:56 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:51.393 11:27:56 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:51.393 11:27:56 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:13:51.393 11:27:56 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:13:51.393 11:27:56 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:13:51.393 11:27:56 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:13:51.393 11:27:56 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:13:51.393 11:27:56 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:13:51.393 11:27:56 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:13:51.393 11:27:56 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:51.393 11:27:56 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:13:51.393 11:27:56 nvme_scc -- scripts/common.sh@345 -- # : 1 00:13:51.393 11:27:56 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:51.393 11:27:56 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:51.393 11:27:56 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:13:51.393 11:27:56 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:13:51.393 11:27:56 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:51.393 11:27:56 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:13:51.393 11:27:56 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:51.393 11:27:56 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:13:51.393 11:27:56 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:13:51.393 11:27:56 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:51.393 11:27:56 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:13:51.393 11:27:56 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:51.393 11:27:56 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:51.393 11:27:56 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:51.393 11:27:56 nvme_scc -- scripts/common.sh@368 -- # return 0 00:13:51.393 11:27:56 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:51.393 11:27:56 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:51.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.393 --rc genhtml_branch_coverage=1 00:13:51.393 --rc genhtml_function_coverage=1 00:13:51.393 --rc genhtml_legend=1 00:13:51.393 --rc geninfo_all_blocks=1 00:13:51.393 --rc geninfo_unexecuted_blocks=1 00:13:51.393 00:13:51.393 ' 00:13:51.393 11:27:56 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:51.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.393 --rc genhtml_branch_coverage=1 00:13:51.393 --rc genhtml_function_coverage=1 00:13:51.393 --rc genhtml_legend=1 00:13:51.393 --rc geninfo_all_blocks=1 00:13:51.393 --rc geninfo_unexecuted_blocks=1 00:13:51.393 00:13:51.393 ' 00:13:51.393 11:27:56 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:13:51.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.393 --rc genhtml_branch_coverage=1 00:13:51.393 --rc genhtml_function_coverage=1 00:13:51.393 --rc genhtml_legend=1 00:13:51.393 --rc geninfo_all_blocks=1 00:13:51.393 --rc geninfo_unexecuted_blocks=1 00:13:51.393 00:13:51.393 ' 00:13:51.393 11:27:56 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:51.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.393 --rc genhtml_branch_coverage=1 00:13:51.393 --rc genhtml_function_coverage=1 00:13:51.393 --rc genhtml_legend=1 00:13:51.393 --rc geninfo_all_blocks=1 00:13:51.393 --rc geninfo_unexecuted_blocks=1 00:13:51.393 00:13:51.393 ' 00:13:51.393 11:27:56 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:51.393 11:27:56 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:51.393 11:27:56 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:13:51.393 11:27:56 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:13:51.393 11:27:56 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:51.393 11:27:56 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:13:51.393 11:27:56 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:51.393 11:27:56 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:51.393 11:27:56 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:51.393 11:27:56 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.393 11:27:56 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.393 11:27:56 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.393 11:27:56 nvme_scc -- paths/export.sh@5 -- # export PATH 00:13:51.393 11:27:56 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
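The lcov probe above gates LCOV_OPTS on "lt 1.15 2": cmp_versions splits each version string on '.', '-' and ':' and compares it component by component. A simplified, self-contained sketch of that comparison, assuming numeric components (the in-tree helper additionally validates every component with decimal() and handles the other operators):

#!/usr/bin/env bash
# version_lt: return 0 if $1 sorts before $2, component-wise,
# modeled on the cmp_versions trace above.
version_lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"   # "1.15" -> (1 15)
    IFS=.-: read -ra ver2 <<< "$2"   # "2"    -> (2)
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        # A missing component compares as 0, so "2" behaves like "2.0".
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal is not less-than
}
version_lt 1.15 2 && echo "lcov predates the 2.x option set"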
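From here the trace is scan_nvme_ctrls walking /sys/class/nvme/nvme*, resolving each controller's PCI address, and snapshotting "nvme id-ctrl" (and, per namespace, "nvme id-ns") output into bash associative arrays, one eval'd key=value pair per register. A minimal sketch of that parsing pattern, assuming a single controller and a fixed array name in place of the helper's eval/nameref indirection (the harness pins /usr/local/src/nvme-cli/nvme; plain nvme is used here):

#!/usr/bin/env bash
# Turn nvme-cli's "register : value" lines into an associative array,
# the same shape the nvme_get trace below builds as nvme0[...].
declare -A ctrl
while IFS=$' \t:' read -r reg val; do
    # Lines look like "vid       : 0x1b36"; reg takes the first field,
    # val the remainder (colons inside values, e.g. subnqn, survive).
    [[ -n $reg && -n $val ]] && ctrl[$reg]=$val
done < <(nvme id-ctrl /dev/nvme0)
printf 'vid=%s mdts=%s subnqn=%s\n' "${ctrl[vid]}" "${ctrl[mdts]}" "${ctrl[subnqn]}"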
00:13:51.393 11:27:56 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:13:51.393 11:27:56 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:13:51.393 11:27:56 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:13:51.393 11:27:56 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:13:51.393 11:27:56 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:13:51.393 11:27:56 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:13:51.393 11:27:56 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:13:51.393 11:27:56 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:13:51.393 11:27:56 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:13:51.393 11:27:56 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:51.393 11:27:56 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:13:51.393 11:27:56 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:13:51.393 11:27:56 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:13:51.393 11:27:56 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:51.651 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:51.910 Waiting for block devices as requested 00:13:51.910 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:51.910 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:52.169 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:52.169 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:57.565 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:57.565 11:28:02 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:13:57.565 11:28:02 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:13:57.565 11:28:02 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:57.565 11:28:02 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:13:57.565 11:28:03 nvme_scc -- scripts/common.sh@18 -- # local i 00:13:57.565 11:28:03 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:13:57.565 11:28:03 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:57.565 11:28:03 nvme_scc -- scripts/common.sh@27 -- # return 0 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.565 11:28:03 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:13:57.565 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:13:57.566 11:28:03 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.566 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.567 11:28:03 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.567 11:28:03 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.567 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:57.568 11:28:03 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:13:57.568 
00:13:57.568 11:28:03 nvme_scc -- nvme/functions.sh@21-23 -- # nvme_get ng0n1 id-ns (cont.): dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:13:57.569 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng0n1 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:13:57.569 11:28:03 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[1]=ng0n1
00:13:57.569 11:28:03 nvme_scc -- nvme/functions.sh@54-57 -- # next ns: /sys/class/nvme/nvme0/nvme0n1 exists; nvme_get nvme0n1 id-ns /dev/nvme0n1
00:13:57.569 11:28:03 nvme_scc -- nvme/functions.sh@22-23 -- # nvme0n1 id-ns: nsze=0x140000 ncap=0x140000
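Every block in this trace is the same small loop in nvme/functions.sh at work: run nvme-cli, split each "field : value" line on the colon, and eval the pair into a global associative array named after the device node. A minimal sketch of that loop, reconstructed from the xtrace output above (the exact trimming and the nvme-cli invocation are assumptions, not the verbatim SPDK source):

nvme_get() {
    local ref=$1; shift              # e.g. ref=ng0n1, remaining args: id-ns /dev/ng0n1
    local reg val
    local -gA "$ref=()"              # global associative array named after the device
    while IFS=: read -r reg val; do
        reg=${reg// /}               # "lbaf  4 " -> "lbaf4", "nsze " -> "nsze"
        val=${val# }                 # drop the single space after the colon
        [[ -n $val ]] || continue    # store populated fields only, as the [[ -n ... ]] tests show
        eval "${ref}[\$reg]=\$val"   # e.g. ng0n1[nsze]=0x140000
    done < <(nvme "$@")              # assumes nvme-cli on PATH; the trace uses a local build
}
nvme_get ng0n1 id-ns /dev/ng0n1      # afterwards: "${ng0n1[nsze]}", "${ng0n1[lbaf4]}", ...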
00:13:57.570 11:28:03 nvme_scc -- nvme/functions.sh@22-23 -- # nvme0n1 id-ns (cont.): nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:13:57.571 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:13:57.571 11:28:03 nvme_scc -- nvme/functions.sh@58-63 -- # _ctrl_ns[1]=nvme0n1; ctrls[nvme0]=nvme0 nvmes[nvme0]=nvme0_ns bdfs[nvme0]=0000:00:11.0 ordered_ctrls[0]=nvme0
00:13:57.571 11:28:03 nvme_scc -- nvme/functions.sh@47-51 -- # next ctrl: /sys/class/nvme/nvme1 exists; pci=0000:00:10.0 passes pci_can_use; ctrl_dev=nvme1
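The values just captured decode directly. For nvme0n1, flbas=0x4 means LBA format 4 is in use; its lbads:12 gives 2^12 = 4096-byte logical blocks, and nsze counts blocks, so 0x140000 blocks is a 5 GiB namespace. A quick check in shell arithmetic (standard NVMe-spec interpretation of these fields, not part of the test itself):

printf 'lbaf in use: %d\n' $((0x4 & 0xf))               # low nibble of flbas -> format 4
printf 'block size:  %d\n' $((1 << 12))                 # lbads:12 -> 4096 bytes
printf 'capacity:    %d bytes\n' $((0x140000 * 4096))   # 5368709120 = 5 GiB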
00:13:57.571 11:28:03 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:13:57.571 11:28:03 nvme_scc -- nvme/functions.sh@22-23 -- # nvme1 id-ctrl: vid=0x1b36 ssvid=0x1af4 sn='12340 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12340 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0 ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload='-'
00:13:57.574 11:28:03 nvme_scc -- nvme/functions.sh@53-57 -- # _ctrl_ns -> nvme1_ns; /sys/class/nvme/nvme1/ng1n1 exists; nvme_get ng1n1 id-ns /dev/ng1n1
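Stepping back, this whole section is one enumeration pass: the script walks /sys/class/nvme, runs id-ctrl once per controller and id-ns once per namespace node (both the ngXnY character device and the nvmeXnY block device), and records the results in bookkeeping arrays. A condensed sketch of that pass, assuming nvme_get is the helper sketched earlier and that the PCI address comes from the controller's sysfs device link (the trace only shows the resulting pci= value, not the lookup):

shopt -s extglob nullglob            # extglob for the @(...) namespace pattern seen in the trace

scan_nvme_ctrls() {
    local ctrl ctrl_dev ns ns_dev pci
    declare -gA ctrls nvmes bdfs
    declare -ga ordered_ctrls
    for ctrl in /sys/class/nvme/nvme*; do
        ctrl_dev=${ctrl##*/}                              # nvme0, nvme1, ...
        pci=$(basename "$(readlink -f "$ctrl/device")")   # BDF such as 0000:00:10.0 (assumed lookup)
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"     # fills nvme1[vid], nvme1[sn], ...
        declare -gA "${ctrl_dev}_ns=()"
        local -n _ctrl_ns="${ctrl_dev}_ns"                # per-controller namespace map
        for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
            ns_dev=${ns##*/}                              # ng1n1 or nvme1n1
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"       # fills ng1n1[nsze], nvme1n1[nsze], ...
            _ctrl_ns[${ns_dev##*n}]=$ns_dev               # key each entry by its namespace ID
        done
        unset -n _ctrl_ns
        ctrls[$ctrl_dev]=$ctrl_dev
        nvmes[$ctrl_dev]=${ctrl_dev}_ns
        bdfs[$ctrl_dev]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
    done
}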
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:13:57.574 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:13:57.574 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.574 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.574 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:57.574 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:13:57.574 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:13:57.574 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.574 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.574 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:57.574 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:13:57.574 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:13:57.574 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.574 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.574 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:57.574 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:13:57.574 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:13:57.574 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.574 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.574 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:57.574 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:13:57.574 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:13:57.574 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.574 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.574 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:57.574 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:13:57.574 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:13:57.574 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.574 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.574 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:57.574 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
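The block of functions.sh@16-@23 frames repeating through this trace is the whole trick behind these dumps: nvme_get runs nvme-cli (/usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 above), splits each "reg : val" output line on the colon, and folds the pairs into a per-device bash associative array via eval. A minimal stand-alone sketch of that pattern, assuming a plain nvme binary on PATH and a local array name in place of the script's nameref:

    declare -A my_ns=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}               # "nsze   " -> "nsze", "lbaf  0" -> "lbaf0"
        [[ -n $reg && -n $val ]] || continue   # skip the banner line and blank fields
        eval "my_ns[$reg]=\"${val# }\""        # e.g. my_ns[nsze]=0x17a17a
    done < <(nvme id-ns /dev/ng1n1)
    echo "nsze=${my_ns[nsze]} flbas=${my_ns[flbas]}"

Because read hands the last variable the remainder of the line, values that themselves contain colons (the lbaf0-lbaf7 descriptors such as "ms:64 lbads:12 rp:0 (in use)") survive intact, which is exactly what the ng1n1[lbafN] assignments in this trace show.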
00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:13:57.575 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:13:57.839 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.839 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.839 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.839 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:13:57.839 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:13:57.839 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.839 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.839 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:57.839 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:13:57.839 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:13:57.839 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.839 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.839 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:57.839 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:13:57.839 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:13:57.839 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.839 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.839 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:57.839 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:13:57.839 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:13:57.839 11:28:03 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:13:57.839 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.839 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.839 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:13:57.839 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:13:57.839 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.839 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.839 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.839 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:13:57.839 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:13:57.839 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.839 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.839 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.839 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:13:57.839 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:13:57.839 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.840 
11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:13:57.840 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:13:57.841 
11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.841 11:28:03 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.841 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.842 11:28:03 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:13:57.842 11:28:03 
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:13:57.842 11:28:03 nvme_scc -- scripts/common.sh@18 -- # local i 00:13:57.842 11:28:03 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:13:57.842 11:28:03 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:57.842 11:28:03 nvme_scc -- scripts/common.sh@27 -- # return 0 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:13:57.842 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
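Just above (functions.sh@47-@63), the trace has moved on to the next controller: the outer loop walks /sys/class/nvme/nvme*, asks pci_can_use whether the controller's BDF (here 0000:00:12.0) may be used, dumps id-ctrl, then visits both namespace node flavors with the extglob pattern from @54 before registering everything in the ctrls/nvmes/bdfs/ordered_ctrls tables. A condensed, self-contained sketch of that flow; the readlink-based BDF lookup is an assumption (the trace only shows the resulting value), and the id-ctrl/id-ns parsing is the nvme_get pattern sketched earlier:

    shopt -s extglob                      # needed for the @( | ) pattern below
    declare -A ctrls=() nvmes=() bdfs=()
    ordered_ctrls=()
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        ctrl_dev=${ctrl##*/}                               # e.g. nvme2
        pci=$(basename "$(readlink -f "$ctrl/device")")    # e.g. 0000:00:12.0
        # matches both the generic node (ng2n1) and the block node (nvme2n1)
        for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
            [[ -e $ns ]] && echo "namespace ${ns##*/} on $ctrl_dev"
        done
        ctrls["$ctrl_dev"]=$ctrl_dev
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns
        bdfs["$ctrl_dev"]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
    done

The per-namespace arrays get stitched to their controller through a nameref (local -n _ctrl_ns=nvme1_ns at @53 earlier in the trace), so nvme1_ns[1] ends up naming the nvme1n1 array without copying any of the captured fields.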
00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:57.843 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:13:57.844 11:28:03 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
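With the registers captured this way, feature probing elsewhere in the suite reduces to bit tests on these arrays. As a hypothetical consumer (not a helper from functions.sh): per the NVMe spec, ONCS bit 8 advertises the Copy command that this nvme_scc test exercises, so the 0x15d dumped for nvme2[oncs] further down decodes as "Copy supported":

    declare -A nvme2=([oncs]=0x15d)       # stand-in for the array built by the trace
    scc_supported() {
        local -n _ctrl=$1                 # bash 4.3+ nameref to the controller array
        (( (_ctrl[oncs] & 0x100) != 0 ))  # ONCS bit 8 = Copy command
    }
    if scc_supported nvme2; then
        echo "nvme2 supports Simple Copy (oncs=${nvme2[oncs]})"
    fi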
00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:13:57.844 11:28:03 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.844 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:13:57.845 
11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.845 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:57.846 
11:28:03 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:13:57.846 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.847 11:28:03 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.847 11:28:03 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:57.847 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:13:57.848 11:28:03 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.848 
11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.848 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.849 11:28:03 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:57.849 11:28:03 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:57.849 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:57.850 11:28:03 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:57.850 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:13:58.113 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:13:58.113 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.113 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.113 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.113 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:13:58.113 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:13:58.113 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.113 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.113 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.113 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:13:58.113 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:13:58.113 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.113 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.113 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.113 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:13:58.113 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:13:58.113 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.113 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.113 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.113 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:13:58.113 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:13:58.113 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.113 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.113 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:58.113 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:13:58.113 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:13:58.113 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.113 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.113 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.113 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:13:58.113 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:13:58.113 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:13:58.113 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.113 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.113 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:13:58.113 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:13:58.113 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.113 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.113 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.113 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:13:58.113 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:13:58.113 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.114 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.114 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.114 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:13:58.114 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:13:58.114 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.114 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.114 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.114 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:13:58.114 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:13:58.114 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.114 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.114 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.114 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:13:58.114 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:13:58.114 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.114 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.114 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.114 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:13:58.114 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:13:58.114 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.114 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.114 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.114 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:13:58.114 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:13:58.114 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.114 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.114 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.114 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:13:58.114 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:13:58.114 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.114 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.114 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.114 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:13:58.114 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:13:58.114 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.114 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.114 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.114 11:28:03 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"'
00:13:58.114 11:28:03 nvme_scc -- [remaining id-ns fields parsed into ng2n3: npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000 lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0']
00:13:58.114 11:28:03 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
00:13:58.114 11:28:03 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:13:58.114 11:28:03 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:13:58.114 11:28:03 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:13:58.114 11:28:03 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:13:58.114 11:28:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val
00:13:58.114 11:28:03 nvme_scc -- nvme/functions.sh@18 -- # shift
00:13:58.114 11:28:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()'
00:13:58.114 11:28:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
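The trace above is bash xtrace from nvme/functions.sh: nvme_get runs an nvme-cli query against a device, splits each line of output on ':' into a register name and a value (IFS=: read -r reg val), and evals every non-empty pair into a global associative array named after the device. A minimal sketch of that pattern, reconstructed from the trace (the whitespace-trimming details are assumptions, not the exact functions.sh source):

    nvme_get_sketch() {                        # pattern traced at functions.sh@17-23
        local ref=$1 reg val                   # ref = target array name, e.g. nvme2n1
        shift                                  # remaining args = the nvme-cli command
        local -gA "$ref=()"                    # declare the array globally
        while IFS=: read -r reg val; do        # nvme-cli prints "reg : value" lines
            reg=${reg//[[:space:]]/}           # assumption: drop padding around the key
            val=${val#"${val%%[![:space:]]*}"} # assumption: drop leading spaces in the value
            [[ -n $val ]] || continue          # skip keys with no value
            eval "${ref}[${reg}]=\"\$val\""    # e.g. nvme2n1[nsze]="0x100000"
        done < <("$@")
    }
    # usage: nvme_get_sketch nvme2n1 nvme id-ns /dev/nvme2n1; echo "${nvme2n1[nsze]}"

Every [[ -n ... ]] / eval pair in the log is one iteration of that loop.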
00:13:58.115 11:28:03 nvme_scc -- [id-ns fields parsed into nvme2n1: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000 lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0']
00:13:58.116 11:28:03 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
00:13:58.116 11:28:03 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:13:58.116 11:28:03 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:13:58.116 11:28:03 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:13:58.116 11:28:03 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:13:58.116 11:28:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
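The recurring for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* line is an extglob pattern that picks up both the generic character-device nodes (ng2n1, ng2n2, ...) and the block-device nodes (nvme2n1, nvme2n2, ...) under a controller's sysfs directory. A small illustration of the expansions it relies on (the paths are hypothetical):

    shopt -s extglob                  # @(a|b) alternation requires extglob
    ctrl=/sys/class/nvme/nvme2        # hypothetical controller path
    echo "${ctrl##*nvme}"             # -> 2      (controller number)
    echo "${ctrl##*/}"                # -> nvme2  (controller name)
    # the glob therefore expands to: /sys/class/nvme/nvme2/@(ng2|nvme2n)*
    ns=/sys/class/nvme/nvme2/nvme2n3
    echo "${ns##*n}"                  # -> 3      (namespace id, the _ctrl_ns index)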
00:13:58.117 11:28:03 nvme_scc -- [id-ns fields parsed into nvme2n2: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000 lbaf0..lbaf7 identical to nvme2n1, lbaf4 in use]
00:13:58.118 11:28:03 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
00:13:58.118 11:28:03 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:13:58.118 11:28:03 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:13:58.118 11:28:03 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:13:58.118 11:28:03 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:13:58.118 11:28:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
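In each lbafN descriptor, ms is the metadata bytes per block, lbads is the LBA data size as a power of two (lbads:9 = 512 B, lbads:12 = 4096 B), and rp is relative performance; flbas=0x4 selects format 4, matching the '(in use)' tag on lbaf4. With nsze=0x100000 blocks of 4 KiB each, these QEMU namespaces are 4 GiB apiece, which a quick arithmetic check confirms:

    nsze=0x100000                  # namespace size in logical blocks
    lbads=12                       # lbaf4 -> 2^12-byte blocks
    block=$((1 << lbads))          # 4096
    bytes=$((nsze * block))        # 1048576 * 4096 = 2^32
    echo "$block B/block, $((bytes >> 30)) GiB"   # -> 4096 B/block, 4 GiB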
00:13:58.119 11:28:03 nvme_scc -- [id-ns fields parsed into nvme2n3: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000 lbaf0..lbaf7 identical to nvme2n1, lbaf4 in use]
00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3
00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
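With nvme2n3 registered, the scan of controller nvme2 is complete. Because the generic ng2nN node and the block nvme2nN node share the same ${ns##*n} index, each block-device entry overwrites its ng counterpart in _ctrl_ns, while the ctrls/nvmes lines (plus the bdfs/ordered_ctrls assignments just below) record the controller itself. A sketch of the resulting state, assuming the _ns naming convention shown in the trace:

    declare -A _ctrl_ns=( [1]=nvme2n1 [2]=nvme2n2 [3]=nvme2n3 )  # ng2nN entries overwritten
    declare -A ctrls=( [nvme2]=nvme2 )          # controller device name
    declare -A nvmes=( [nvme2]=nvme2_ns )       # name of this controller's namespace map
    declare -A bdfs=( [nvme2]=0000:00:12.0 )    # PCI bus:device.function
    ordered_ctrls[2]=nvme2                      # indexed by controller number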
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:13:58.120 11:28:03 nvme_scc -- scripts/common.sh@18 -- # local i 00:13:58.120 11:28:03 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:13:58.120 11:28:03 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:58.120 11:28:03 nvme_scc -- scripts/common.sh@27 -- # return 0 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.120 11:28:03 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:13:58.120 11:28:03 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:13:58.120 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:13:58.121 11:28:03 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.121 
11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:13:58.121 11:28:03 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.121 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.384 
11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:13:58.384 
11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.384 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.385 11:28:03 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:13:58.385 11:28:03 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:13:58.385 11:28:03 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:13:58.385 11:28:03 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:58.386 11:28:03 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:58.386 11:28:03 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:13:58.386 11:28:03 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:13:58.386 11:28:03 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:13:58.386 11:28:03 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:58.386 11:28:03 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:13:58.386 11:28:03 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:13:58.386 11:28:03 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:13:58.386 11:28:03 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:13:58.386 11:28:03 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
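The trace above repeats one pattern for every register: nvme_get pipes `nvme id-ctrl` (or `id-ns`) output through `IFS=: read -r reg val` and eval-assigns each field into a global associative array such as nvme3. Below is a minimal sketch of that loop, assuming nvme-cli's usual "field : value" text layout; it is a simplification for illustration, not the literal functions.sh source (the real script uses eval with a shifted array name so one helper can fill nvme0, nvme2n3, and so on):

    declare -A ctrl
    # Split each "field : value" line on the colon; read strips the padding.
    while IFS=': ' read -r reg val; do
        [[ -n $reg && -n $val ]] || continue   # skip blank or headerless lines
        ctrl[$reg]=$val                        # e.g. ctrl[oncs]=0x15d
    done < <(nvme id-ctrl /dev/nvme3)
    echo "oncs=${ctrl[oncs]} mdts=${ctrl[mdts]}"

Multi-word values such as mn ("QEMU NVMe Ctrl") land intact in val because read assigns the remainder of the line to the last variable, which matches the quoted-string entries visible in the trace.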
00:13:58.386 11:28:03 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:13:58.386 11:28:03 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:13:58.386 11:28:03 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:13:58.386 11:28:03 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:58.386 11:28:03 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:58.386 11:28:03 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:13:58.386 11:28:03 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:13:58.386 11:28:03 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:13:58.386 11:28:03 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:58.386 11:28:03 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:13:58.386 11:28:03 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:13:58.386 11:28:03 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:13:58.386 11:28:03 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:13:58.386 11:28:03 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:13:58.386 11:28:03 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:13:58.386 11:28:03 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:13:58.386 11:28:03 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:13:58.386 11:28:03 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:58.386 11:28:03 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:58.386 11:28:03 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:13:58.386 11:28:03 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:13:58.386 11:28:03 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:13:58.386 11:28:03 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:13:58.386 11:28:03 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:13:58.386 11:28:03 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:13:58.386 11:28:03 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:13:58.386 11:28:03 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:13:58.386 11:28:03 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:58.971 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:59.538 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:59.538 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:59.797 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:59.797 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:59.797 11:28:05 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:13:59.797 11:28:05 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:59.797 11:28:05 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:59.797 11:28:05 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:13:59.797 ************************************ 00:13:59.797 START TEST nvme_simple_copy 00:13:59.797 ************************************ 00:13:59.797 11:28:05 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:14:00.057 Initializing NVMe Controllers 00:14:00.057 Attaching to 0000:00:10.0 00:14:00.057 Controller supports SCC. Attached to 0000:00:10.0 00:14:00.057 Namespace ID: 1 size: 6GB 00:14:00.057 Initialization complete. 
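With all four controllers scanned, get_ctrls_with_feature keeps those whose ONCS word advertises the Simple Copy command: bit 8 of oncs (0x15d on every controller here) must be set, and nvme1 is echoed first, so the simple-copy test runs against it at 0000:00:10.0. A hypothetical standalone helper mirroring the ctrl_has_scc test from the trace; the awk extraction is an assumption about nvme-cli's plain-text output, not SPDK code:

    ctrl_supports_scc() {
        local dev=$1 oncs
        # ONCS comes from Identify Controller; bit 8 advertises Simple Copy (SCC).
        oncs=$(nvme id-ctrl "$dev" | awk -F': *' '$1 ~ /^oncs/ {print $2; exit}')
        (( oncs & 1 << 8 ))
    }

    ctrl_supports_scc /dev/nvme1 && echo "/dev/nvme1 supports Simple Copy"

Since 0x15d & 0x100 is nonzero, the check passes, matching the "Controller supports SCC" line in the test output that follows.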
00:14:00.057 00:14:00.057 Controller QEMU NVMe Ctrl (12340 ) 00:14:00.057 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:14:00.057 Namespace Block Size:4096 00:14:00.057 Writing LBAs 0 to 63 with Random Data 00:14:00.057 Copied LBAs from 0 - 63 to the Destination LBA 256 00:14:00.057 LBAs matching Written Data: 64 00:14:00.057 00:14:00.057 real 0m0.314s 00:14:00.057 user 0m0.116s 00:14:00.057 sys 0m0.095s 00:14:00.057 11:28:05 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:00.057 11:28:05 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:14:00.057 ************************************ 00:14:00.057 END TEST nvme_simple_copy 00:14:00.057 ************************************ 00:14:00.315 ************************************ 00:14:00.315 END TEST nvme_scc 00:14:00.315 ************************************ 00:14:00.315 00:14:00.315 real 0m9.176s 00:14:00.315 user 0m1.741s 00:14:00.315 sys 0m2.133s 00:14:00.315 11:28:05 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:00.315 11:28:05 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:14:00.315 11:28:05 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:14:00.315 11:28:05 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:14:00.315 11:28:05 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:14:00.315 11:28:05 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:14:00.315 11:28:05 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:14:00.315 11:28:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:00.315 11:28:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:00.315 11:28:05 -- common/autotest_common.sh@10 -- # set +x 00:14:00.315 ************************************ 00:14:00.315 START TEST nvme_fdp 00:14:00.315 ************************************ 00:14:00.315 11:28:05 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:14:00.315 * Looking for test storage... 00:14:00.315 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:14:00.315 11:28:05 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:00.315 11:28:06 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version 00:14:00.315 11:28:06 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:00.315 11:28:06 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:00.315 11:28:06 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:00.315 11:28:06 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:00.315 11:28:06 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:00.315 11:28:06 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:14:00.315 11:28:06 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:14:00.315 11:28:06 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:14:00.315 11:28:06 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:14:00.315 11:28:06 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:14:00.315 11:28:06 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:14:00.315 11:28:06 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:14:00.315 11:28:06 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:00.315 11:28:06 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:14:00.315 11:28:06 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:14:00.315 11:28:06 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:00.315 11:28:06 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:00.315 11:28:06 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:14:00.315 11:28:06 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:14:00.574 11:28:06 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:00.575 11:28:06 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:14:00.575 11:28:06 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:14:00.575 11:28:06 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:14:00.575 11:28:06 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:14:00.575 11:28:06 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:00.575 11:28:06 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:14:00.575 11:28:06 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:14:00.575 11:28:06 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:00.575 11:28:06 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:00.575 11:28:06 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:14:00.575 11:28:06 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:00.575 11:28:06 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:00.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.575 --rc genhtml_branch_coverage=1 00:14:00.575 --rc genhtml_function_coverage=1 00:14:00.575 --rc genhtml_legend=1 00:14:00.575 --rc geninfo_all_blocks=1 00:14:00.575 --rc geninfo_unexecuted_blocks=1 00:14:00.575 00:14:00.575 ' 00:14:00.575 11:28:06 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:00.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.575 --rc genhtml_branch_coverage=1 00:14:00.575 --rc genhtml_function_coverage=1 00:14:00.575 --rc genhtml_legend=1 00:14:00.575 --rc geninfo_all_blocks=1 00:14:00.575 --rc geninfo_unexecuted_blocks=1 00:14:00.575 00:14:00.575 ' 00:14:00.575 11:28:06 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:00.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.575 --rc genhtml_branch_coverage=1 00:14:00.575 --rc genhtml_function_coverage=1 00:14:00.575 --rc genhtml_legend=1 00:14:00.575 --rc geninfo_all_blocks=1 00:14:00.575 --rc geninfo_unexecuted_blocks=1 00:14:00.575 00:14:00.575 ' 00:14:00.575 11:28:06 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:00.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.575 --rc genhtml_branch_coverage=1 00:14:00.575 --rc genhtml_function_coverage=1 00:14:00.575 --rc genhtml_legend=1 00:14:00.575 --rc geninfo_all_blocks=1 00:14:00.575 --rc geninfo_unexecuted_blocks=1 00:14:00.575 00:14:00.575 ' 00:14:00.575 11:28:06 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:14:00.575 11:28:06 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:14:00.575 11:28:06 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:14:00.575 11:28:06 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:14:00.575 11:28:06 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:00.575 11:28:06 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:14:00.575 11:28:06 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:00.575 11:28:06 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:00.575 11:28:06 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:00.575 11:28:06 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.575 11:28:06 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.575 11:28:06 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.575 11:28:06 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:14:00.575 11:28:06 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.575 11:28:06 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:14:00.575 11:28:06 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:14:00.575 11:28:06 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:14:00.575 11:28:06 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:14:00.575 11:28:06 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:14:00.575 11:28:06 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:14:00.575 11:28:06 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:14:00.575 11:28:06 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:14:00.575 11:28:06 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:14:00.575 11:28:06 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:00.575 11:28:06 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:00.833 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:01.092 Waiting for block devices as requested 00:14:01.092 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:14:01.350 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:14:01.350 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:14:01.610 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:14:06.891 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:14:06.891 11:28:12 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:14:06.891 11:28:12 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci
00:14:06.891 11:28:12 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:14:06.891 11:28:12 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]]
00:14:06.891 11:28:12 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0
00:14:06.891 11:28:12 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0
00:14:06.891 11:28:12 nvme_fdp -- scripts/common.sh@18 -- # local i
00:14:06.891 11:28:12 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]]
00:14:06.891 11:28:12 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:14:06.891 11:28:12 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:14:06.891 11:28:12 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0
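
The pci_can_use gate above decides whether the test may claim the controller at 0000:00:11.0; since both filter lists are unset in this run (the [[ -z '' ]] branch), every device passes. A minimal sketch of that kind of allow/block filter follows, with hypothetical PCI_ALLOW_LIST/PCI_BLOCK_LIST names standing in for whatever variables scripts/common.sh actually consults:

  pci_filter() {
      local bdf=$1
      # Reject the device if a block list is set and names this BDF.
      [[ -n $PCI_BLOCK_LIST && $PCI_BLOCK_LIST =~ $bdf ]] && return 1
      # An empty allow list (the [[ -z '' ]] case traced above) admits everything.
      [[ -z $PCI_ALLOW_LIST ]] && return 0
      # Otherwise the BDF must be explicitly allowed.
      [[ $PCI_ALLOW_LIST =~ $bdf ]]
  }

Something like pci_filter 0000:00:11.0 && ctrl_dev=nvme0 reproduces the return-0 path seen in the trace.
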
00:14:06.891 11:28:12 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0
00:14:06.891 11:28:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val
00:14:06.891 11:28:12 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:14:06.891 11:28:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()'
00:14:06.891 11:28:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
00:14:06.891 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36
00:14:06.891 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4
00:14:06.891 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 '
00:14:06.891 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl '
00:14:06.891 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 '
00:14:06.892 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6
00:14:06.892 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400
00:14:06.892 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0
00:14:06.892 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7
00:14:06.892 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0
00:14:06.892 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400
00:14:06.892 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0
00:14:06.892 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0
00:14:06.892 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100
00:14:06.892 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000
00:14:06.892 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0
00:14:06.892 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1
00:14:06.892 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000
00:14:06.892 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0
00:14:06.892 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0
00:14:06.892 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0
00:14:06.892 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0
00:14:06.892 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0
00:14:06.892 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0
00:14:06.892 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a
00:14:06.892 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3
00:14:06.892 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3
00:14:06.892 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3
00:14:06.892 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7
00:14:06.892 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0
00:14:06.893 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0
00:14:06.893 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0
00:14:06.893 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0
00:14:06.893 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343
00:14:06.893 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373
00:14:06.893 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0
00:14:06.893 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0
00:14:06.893 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0
00:14:06.893 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0
00:14:06.893 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0
00:14:06.893 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0
00:14:06.893 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0
00:14:06.893 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0
00:14:06.893 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0
00:14:06.893 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0
00:14:06.893 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0
00:14:06.893 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0
00:14:06.893 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0
00:14:06.893 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0
00:14:06.893 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0
00:14:06.893 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0
00:14:06.893 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0
00:14:06.894 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0
00:14:06.894 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0
00:14:06.894 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0
00:14:06.894 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0
00:14:06.894 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0
00:14:06.894 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0
00:14:06.894 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0
00:14:06.894 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0
00:14:06.894 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66
00:14:06.894 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44
00:14:06.894 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0
00:14:06.894 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256
00:14:06.894 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d
00:14:06.894 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0
00:14:06.894 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0
00:14:06.894 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7
00:14:06.894 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0
00:14:06.894 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0
00:14:06.894 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0
00:14:06.894 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0
00:14:06.894 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0
00:14:06.894 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3
00:14:06.894 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1
00:14:06.894 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0
00:14:06.894 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0
00:14:06.894 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0
00:14:06.894 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341
00:14:06.894 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0
00:14:06.894 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0
00:14:06.895 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0
00:14:06.895 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0
00:14:06.895 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0
00:14:06.895 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0
00:14:06.895 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:14:06.895 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-'
00:14:06.895 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=-
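
Everything from the @16 invocation above down to the power-state line is nvme_get populating the global associative array nvme0: each "reg : val" row printed by nvme-cli is split on the colon and stored under its register name, which is what the one-assignment-per-register trace reflects. A condensed sketch of that loop, simplified from the nvme/functions.sh@16-23 frames traced here (the hypothetical ${NVME_CLI:-nvme} stands in for the hard-coded /usr/local/src/nvme-cli/nvme path; multi-line and vendor-specific fields are ignored):

  nvme_get() {
      local ref=$1 reg val
      shift
      local -gA "$ref=()"                        # e.g. global assoc array nvme0, as at @20
      while IFS=: read -r reg val; do
          [[ -n $reg && -n $val ]] || continue   # skip banner and blank lines
          reg=${reg//[[:space:]]/}               # strip padding around the key
          val=${val# }                           # keep trailing spaces: sn/mn/fr are space-padded
          eval "${ref}[$reg]=\"\$val\""          # nvme0[vid]=0x1b36, nvme0[sn]='12341 ', ...
      done < <("${NVME_CLI:-nvme}" "$@")
  }

Called as in the trace, nvme_get nvme0 id-ctrl /dev/nvme0 leaves the whole identify-controller structure queryable as ${nvme0[mdts]}, ${nvme0[oncs]}, and so on.
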
00:14:06.895 11:28:12 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:14:06.895 11:28:12 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:14:06.895 11:28:12 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]]
00:14:06.895 11:28:12 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1
00:14:06.895 11:28:12 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1
00:14:06.895 11:28:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val
00:14:06.895 11:28:12 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:14:06.895 11:28:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()'
00:14:06.895 11:28:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1
00:14:06.895 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000
00:14:06.895 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000
00:14:06.895 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000
00:14:06.895 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14
00:14:06.895 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7
00:14:06.895 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4
00:14:06.895 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3
00:14:06.895 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f
00:14:06.896 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0
00:14:06.896 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0
00:14:06.896 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0
00:14:06.896 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0
00:14:06.896 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1
00:14:06.896 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0
00:14:06.896 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0
00:14:06.896 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0
00:14:06.896 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0
00:14:06.896 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0
00:14:06.896 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0
00:14:06.896 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0
00:14:06.896 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0
00:14:06.896 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0
00:14:06.896 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0
00:14:06.896 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0
00:14:06.896 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0
00:14:06.896 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0
00:14:06.896 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128
00:14:06.896 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128
00:14:06.896 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127
00:14:06.896 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0
00:14:06.896 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0
00:14:06.896 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0
00:14:06.896 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0
00:14:06.896 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0
00:14:06.896 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000
00:14:06.896 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000
00:14:06.897 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:14:06.897 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:14:06.897 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:14:06.897 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:14:06.897 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:14:06.897 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:14:06.897 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:14:06.897 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 '
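
The glob at functions.sh@54 above is what drives both passes of this namespace loop: with extglob enabled, @("ng${ctrl##*nvme}"|"${ctrl##*/}n")* reduces to @("ng0"|"nvme0n")* for ctrl=/sys/class/nvme/nvme0, matching the generic char-device node ng0n1 just parsed and the block-device node nvme0n1 parsed next. A standalone illustration of the expansion and of the ${ns##*n} index extraction used at @58:

  shopt -s extglob nullglob
  ctrl=/sys/class/nvme/nvme0
  for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
      # ${ctrl##*nvme} -> "0" and ${ctrl##*/} -> "nvme0", so the pattern
      # is @("ng0"|"nvme0n")* and matches both namespace nodes.
      echo "node=${ns##*/} nsid=${ns##*n}"   # -> node=ng0n1 nsid=1, node=nvme0n1 nsid=1
  done

For scale, the values just parsed give the namespace size directly: the in-use format lbaf4 has lbads:12 (4096-byte blocks), so nsze=0x140000, i.e. 1310720 blocks, works out to 1310720 * 4096 B = 5 GiB.
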
00:14:06.897 11:28:12 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1
00:14:06.897 11:28:12 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:14:06.897 11:28:12 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:14:06.897 11:28:12 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:14:06.897 11:28:12 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:14:06.897 11:28:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val
00:14:06.897 11:28:12 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:14:06.897 11:28:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()'
00:14:06.897 11:28:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:14:06.897 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000
00:14:06.897 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000
00:14:06.897 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000
00:14:06.897 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14
00:14:06.897 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7
00:14:06.897 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4
00:14:06.898 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3
00:14:06.898 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f
00:14:06.898 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0
00:14:06.898 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0
00:14:06.898 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0
00:14:06.898 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0
00:14:06.898 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1
00:14:06.898 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0
00:14:06.898 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0
00:14:06.898 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0
00:14:06.898 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0
00:14:06.898 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0
00:14:06.898 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0
00:14:06.898 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0
00:14:06.898 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0
00:14:06.898 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0
00:14:06.898 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0
00:14:06.898 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0
00:14:06.898 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0
00:14:06.898 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0
00:14:06.898 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128
00:14:06.898 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128
00:14:06.898 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127
00:14:06.898 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0
00:14:06.898 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0
00:14:06.898 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0
00:14:06.898 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0
00:14:06.898 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0
00:14:06.898 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000
00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000
00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
"' 00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:14:06.899 11:28:12 nvme_fdp -- scripts/common.sh@18 -- # local i 00:14:06.899 11:28:12 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:14:06.899 11:28:12 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:14:06.899 11:28:12 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.899 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:14:06.900 11:28:12 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.900 11:28:12 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:14:06.900 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.901 11:28:12 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.901 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.902 11:28:12 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.902 11:28:12 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.902 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:14:06.903 11:28:12 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:14:06.903 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:14:06.904 11:28:12 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.904 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:14:06.905 11:28:12 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:14:06.905 11:28:12 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.905 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:14:06.906 11:28:12 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.906 11:28:12 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:14:06.906 11:28:12 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:14:06.906 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
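The pattern repeating throughout this trace is SPDK's nvme_get helper filling one global associative array per device node: functions.sh@16 runs the bundled nvme-cli binary, @21 splits each output line on ":" into a register name and value, @22 skips empty values, and @23 evals the pair into the array (ng1n1, nvme1n1, ...). A minimal sketch of that loop, reconstructed only from the line references visible in the trace; the key-whitespace cleanup is an assumption and the real functions.sh body may differ:

  # Sketch of nvme_get as implied by the functions.sh@16..23 entries above.
  # $1: target array name (e.g. nvme1n1); rest: nvme-cli subcommand + device.
  nvme_get() {
      local ref=$1 reg val
      shift                                    # functions.sh@18
      local -gA "$ref=()"                      # functions.sh@20: fresh global array

      while IFS=: read -r reg val; do          # functions.sh@21: split "reg : val"
          [[ -n $val ]] || continue            # functions.sh@22: keep non-empty only
          reg=${reg//[[:space:]]/}             # assumed: strip padding around keys
          eval "${ref}[${reg}]=\"${val# }\""   # functions.sh@23: store key/value
      done < <(/usr/local/src/nvme-cli/nvme "$@")   # functions.sh@16
  }

Because val takes the whole remainder of the read, inner colons survive the split, which is why composite values such as "ms:0 lbads:9 rp:0" land in lbaf0..lbaf7 unsplit.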
00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:14:06.907 11:28:12 nvme_fdp -- scripts/common.sh@18 -- # local i 00:14:06.907 11:28:12 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:14:06.907 11:28:12 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:14:06.907 11:28:12 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:14:06.907 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.908 11:28:12 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
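One level up, the functions.sh@47..63 and scripts/common.sh@18..27 entries at the start of this controller's parse show the loop that drives the whole scan: each /sys/class/nvme/nvme* entry has its PCI address checked by pci_can_use (it returns 0 here because no allow/deny list is set, hence the empty left-hand side in the [[ =~ 0000:00:12.0 ]] trace), then id-ctrl plus per-namespace id-ns data are collected and the controller is registered. A condensed reconstruction from the trace; in the real script this runs inside a scan function, and the BDF derivation is an assumption since @49 only records the result:

  shopt -s extglob                              # needed for the @(...) pattern below
  for ctrl in /sys/class/nvme/nvme*; do         # functions.sh@47
      [[ -e $ctrl ]] || continue                # functions.sh@48
      pci=$(basename "$(readlink -f "$ctrl/device")")   # assumed; e.g. 0000:00:12.0
      pci_can_use "$pci" || continue            # scripts/common.sh@18..27
      ctrl_dev=${ctrl##*/}                      # functions.sh@51: e.g. nvme2
      nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"     # functions.sh@52

      declare -n _ctrl_ns=${ctrl_dev}_ns        # functions.sh@53 (local -n in-script)
      for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # functions.sh@54
          [[ -e $ns ]] || continue              # functions.sh@55
          ns_dev=${ns##*/}                      # functions.sh@56: ng2n1, nvme2n1, ...
          nvme_get "$ns_dev" id-ns "/dev/$ns_dev"       # functions.sh@57
          _ctrl_ns[${ns##*n}]=$ns_dev           # functions.sh@58: keyed by ns number
      done

      ctrls["$ctrl_dev"]=$ctrl_dev              # functions.sh@60..63: registration
      nvmes["$ctrl_dev"]=${ctrl_dev}_ns
      bdfs["$ctrl_dev"]=$pci
      ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
      unset -n _ctrl_ns
  done

Note that both device nodes of a namespace (ng2n1 and nvme2n1) hash to the same namespace number at @58, so the later entry simply overwrites the earlier one, as the trace for nvme1 above shows.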
00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:14:06.908 11:28:12 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:14:06.908 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
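The values land exactly as nvme-cli prints them, so consumers have to supply the units themselves: wctemp=343 and cctemp=373 just above are Kelvin, the encoding the NVMe spec uses for the WCTEMP/CCTEMP thresholds, i.e. 70 C warning and 100 C critical for this QEMU controller. A small, hypothetical read-back against the array this trace builds (kelvin_to_c is a made-up helper, not part of functions.sh):

  # nvme2 is the associative array populated by nvme_get above;
  # integer precision is enough for these thresholds.
  kelvin_to_c() { echo $(($1 - 273)); }

  echo "warning threshold:  $(kelvin_to_c "${nvme2[wctemp]}") C"   # 343 K -> 70 C
  echo "critical threshold: $(kelvin_to_c "${nvme2[cctemp]}") C"   # 373 K -> 100 C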
00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:14:06.909 11:28:12 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:14:06.909 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.910 11:28:12 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.910 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
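Once a controller's id-ctrl parse ends (ps0, rwt and active_power_workload are the last fields below), functions.sh@53 binds a nameref so the per-namespace arrays can be reached generically: nvme2_ns maps namespace numbers to array names such as ng2n1, whose own id-ns fields follow. A hypothetical walk over the finished structures, using only names that appear in this trace:

  # ctrls/nvmes/bdfs are keyed by controller; each ${ctrl}_ns map points at
  # one associative array per namespace node (ng2n1, nvme2n1, ...).
  for ctrl_dev in "${!ctrls[@]}"; do
      echo "$ctrl_dev @ ${bdfs[$ctrl_dev]}"
      unset -n ns_map ns
      declare -n ns_map=${nvmes[$ctrl_dev]}         # e.g. nvme2_ns
      for nsid in "${!ns_map[@]}"; do
          declare -n ns=${ns_map[$nsid]}            # e.g. ng2n1
          echo "  ns$nsid: nsze=${ns[nsze]} flbas=${ns[flbas]}"
          unset -n ns
      done
  done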
00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.911 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.912 
11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:06.912 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:14:07.175 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:14:07.175 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.175 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.175 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.175 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:14:07.175 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:14:07.175 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.175 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.175 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.175 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:14:07.175 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:14:07.175 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.175 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.175 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:07.175 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:14:07.175 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:14:07.175 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.175 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.175 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:07.175 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:14:07.175 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:14:07.175 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.175 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.175 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:14:07.175 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:14:07.175 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:14:07.175 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.175 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.175 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.175 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:14:07.175 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:14:07.175 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.175 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.176 11:28:12 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:14:07.176 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:14:07.177 11:28:12 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.177 
11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.177 11:28:12 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.177 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:14:07.178 11:28:12 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:14:07.178 
11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.178 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:14:07.179 11:28:12 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:14:07.179 11:28:12 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:14:07.179 11:28:12 nvme_fdp -- 
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.179 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:14:07.180 
11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:14:07.180 11:28:12 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:14:07.180 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:14:07.181 
11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
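The block above is bash xtrace from nvme/functions.sh: nvme_get feeds the output of nvme-cli's id-ns for /dev/nvme2n1 through a colon-split read loop and evals every non-empty register into a global associative array named after the device, which is why each register shows up first in a -n test, then in an eval, then as the resulting assignment. A minimal sketch of that loop, reconstructed only from the functions.sh@16-23 line references visible in the trace; the whitespace trimming and the process substitution are assumptions, since the log shows the traced statements but not the function source:

    # sketch of nvme_get as traced above -- not the shipped function
    nvme_get() {
        local ref=$1 reg val                           # functions.sh@17
        shift                                          # functions.sh@18
        local -gA "$ref=()"                            # functions.sh@20: e.g. declare nvme2n1=() globally
        while IFS=: read -r reg val; do                # functions.sh@21: split "reg : val" on ':'
            [[ -n $val ]] || continue                  # functions.sh@22: skip blank registers
            reg=${reg// /} val=${val# }                # trimming is an assumption
            eval "${ref}[$reg]=\"$val\""               # functions.sh@23: nvme2n1[lbaf7]='ms:64 ...'
        done < <(/usr/local/src/nvme-cli/nvme "$@")    # functions.sh@16
    }
    # invoked as in the trace: nvme_get nvme2n1 id-ns /dev/nvme2n1

The lbaf0-lbaf7 entries just parsed are the namespace's eight supported LBA formats; flbas=0x4 marks index 4 (ms:0 lbads:12, i.e. 2^12 = 4096-byte blocks, the one tagged "(in use)") as the active format.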
00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:14:07.181 11:28:12 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:14:07.181 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:14:07.182 11:28:12 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.182 11:28:12 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:14:07.182 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:14:07.183 11:28:12 
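At this point the trace has finished nvme2n2, recorded it in _ctrl_ns, and begun the identical id-ns pass for nvme2n3 (the register dump resumes below). The functions.sh@54-58 references are the enumeration loop driving each pass: it globs the controller's namespaces out of sysfs, calls nvme_get on each, and indexes the result by namespace number. A sketch under those assumptions; the shopt settings and the declare are inferred, since the extglob pattern and the array write only work once they are enabled and declared:

    # sketch of the per-namespace loop (functions.sh@54-58)
    shopt -s extglob nullglob        # assumed: required for the @(...) glob below
    declare -a _ctrl_ns              # assumed: indexed by namespace number
    ctrl=/sys/class/nvme/nvme2
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # functions.sh@54
        [[ -e $ns ]] || continue                                  # functions.sh@55
        ns_dev=${ns##*/}                                          # functions.sh@56: e.g. nvme2n3
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"                   # functions.sh@57
        _ctrl_ns[${ns##*n}]=$ns_dev                               # functions.sh@58: 3 -> nvme2n3
    done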
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.183 11:28:12 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:14:07.183 11:28:12 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:14:07.183 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.184 11:28:12 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:14:07.184 11:28:12 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:14:07.184 11:28:12 nvme_fdp -- scripts/common.sh@18 -- # local i 00:14:07.184 11:28:12 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:14:07.184 11:28:12 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:14:07.184 11:28:12 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.184 11:28:12 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:14:07.184 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.185 11:28:12 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.185 
11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.185 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.445 11:28:12 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.445 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
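The wall of trace above (and the remaining power-state fields parsed the same way just below) is nvme/functions.sh splitting identify-controller output into "reg : val" pairs on the colon and eval-ing each pair into a per-controller associative array. A minimal standalone sketch of that pattern, assuming nvme-cli's human-readable "name : value" layout; the array name and device node are illustrative, not the functions.sh implementation:

    declare -A ctrl
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}              # field name with padding stripped
        IFS=' ' read -r val <<<"$val"         # trim outer blanks, keep inner spaces
        [[ -n $reg ]] && ctrl[$reg]=$val      # e.g. ctrl[lpa]=0x7, ctrl[wctemp]=343
    done < <(nvme id-ctrl /dev/nvme3)
    echo "warning temp: ${ctrl[wctemp]} K, critical temp: ${ctrl[cctemp]} K"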
00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:14:07.446 11:28:12 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:14:07.446 11:28:12 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:14:07.446 11:28:13 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:14:07.446 11:28:13 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:14:07.446 11:28:13 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:14:07.446 11:28:13 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:14:07.446 11:28:13 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:14:07.446 11:28:13 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:14:07.446 11:28:13 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:14:07.447 11:28:13 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:14:07.447 11:28:13 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:14:07.447 11:28:13 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:14:07.447 11:28:13 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:14:07.447 11:28:13 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:14:07.447 11:28:13 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:08.012 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:08.578 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:08.578 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:08.838 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:08.838 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:08.838 11:28:14 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:14:08.838 11:28:14 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:08.838 11:28:14 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:08.838 11:28:14 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:14:08.838 ************************************ 00:14:08.838 START TEST nvme_flexible_data_placement 00:14:08.838 ************************************ 00:14:08.838 11:28:14 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:14:09.096 Initializing NVMe Controllers 00:14:09.096 Attaching to 0000:00:13.0 00:14:09.096 Controller supports FDP Attached to 0000:00:13.0 00:14:09.096 Namespace ID: 1 Endurance Group ID: 1 00:14:09.096 Initialization complete. 
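The controller selection traced above hinges on one bit: CTRATT bit 19 is the Flexible Data Placement attribute, so nvme3's 0x88010 qualifies while the 0x8000 controllers do not. The same test in isolation, with the ctratt values copied from this run:

    has_fdp() { (( $1 & 1 << 19 )); }         # CTRATT bit 19 = Flexible Data Placement
    for ctratt in 0x8000 0x8000 0x88010 0x8000; do
        has_fdp "$ctratt" && echo "ctratt=$ctratt -> FDP capable"   # only 0x88010 passes
    done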
00:14:09.096 00:14:09.096 ================================== 00:14:09.096 == FDP tests for Namespace: #01 == 00:14:09.096 ================================== 00:14:09.096 00:14:09.096 Get Feature: FDP: 00:14:09.096 ================= 00:14:09.096 Enabled: Yes 00:14:09.096 FDP configuration Index: 0 00:14:09.096 00:14:09.096 FDP configurations log page 00:14:09.096 =========================== 00:14:09.096 Number of FDP configurations: 1 00:14:09.096 Version: 0 00:14:09.096 Size: 112 00:14:09.096 FDP Configuration Descriptor: 0 00:14:09.096 Descriptor Size: 96 00:14:09.096 Reclaim Group Identifier format: 2 00:14:09.096 FDP Volatile Write Cache: Not Present 00:14:09.096 FDP Configuration: Valid 00:14:09.096 Vendor Specific Size: 0 00:14:09.096 Number of Reclaim Groups: 2 00:14:09.096 Number of Reclaim Unit Handles: 8 00:14:09.096 Max Placement Identifiers: 128 00:14:09.096 Number of Namespaces Supported: 256 00:14:09.096 Reclaim Unit Nominal Size: 6000000 bytes 00:14:09.096 Estimated Reclaim Unit Time Limit: Not Reported 00:14:09.096 RUH Desc #000: RUH Type: Initially Isolated 00:14:09.096 RUH Desc #001: RUH Type: Initially Isolated 00:14:09.096 RUH Desc #002: RUH Type: Initially Isolated 00:14:09.096 RUH Desc #003: RUH Type: Initially Isolated 00:14:09.096 RUH Desc #004: RUH Type: Initially Isolated 00:14:09.096 RUH Desc #005: RUH Type: Initially Isolated 00:14:09.096 RUH Desc #006: RUH Type: Initially Isolated 00:14:09.096 RUH Desc #007: RUH Type: Initially Isolated 00:14:09.096 00:14:09.096 FDP reclaim unit handle usage log page 00:14:09.096 ====================================== 00:14:09.096 Number of Reclaim Unit Handles: 8 00:14:09.096 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:14:09.096 RUH Usage Desc #001: RUH Attributes: Unused 00:14:09.096 RUH Usage Desc #002: RUH Attributes: Unused 00:14:09.096 RUH Usage Desc #003: RUH Attributes: Unused 00:14:09.096 RUH Usage Desc #004: RUH Attributes: Unused 00:14:09.096 RUH Usage Desc #005: RUH Attributes: Unused 00:14:09.096 RUH Usage Desc #006: RUH Attributes: Unused 00:14:09.096 RUH Usage Desc #007: RUH Attributes: Unused 00:14:09.096 00:14:09.096 FDP statistics log page 00:14:09.096 ======================= 00:14:09.096 Host bytes with metadata written: 788598784 00:14:09.096 Media bytes with metadata written: 788672512 00:14:09.096 Media bytes erased: 0 00:14:09.096 00:14:09.096 FDP Reclaim unit handle status 00:14:09.096 ============================== 00:14:09.096 Number of RUHS descriptors: 2 00:14:09.096 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000000fef 00:14:09.096 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:14:09.096 00:14:09.096 FDP write on placement id: 0 success 00:14:09.096 00:14:09.096 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:14:09.096 00:14:09.096 IO mgmt send: RUH update for Placement ID: #0 Success 00:14:09.096 00:14:09.096 Get Feature: FDP Events for Placement handle: #0 00:14:09.096 ======================== 00:14:09.096 Number of FDP Events: 6 00:14:09.096 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:14:09.096 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:14:09.096 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:14:09.096 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:14:09.096 FDP Event: #4 Type: Media Reallocated Enabled: No 00:14:09.096 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:14:09.096 00:14:09.096 FDP events log page
00:14:09.096 =================== 00:14:09.096 Number of FDP events: 1 00:14:09.096 FDP Event #0: 00:14:09.096 Event Type: RU Not Written to Capacity 00:14:09.096 Placement Identifier: Valid 00:14:09.096 NSID: Valid 00:14:09.096 Location: Valid 00:14:09.096 Placement Identifier: 0 00:14:09.096 Event Timestamp: b 00:14:09.096 Namespace Identifier: 1 00:14:09.096 Reclaim Group Identifier: 0 00:14:09.096 Reclaim Unit Handle Identifier: 0 00:14:09.096 00:14:09.096 FDP test passed 00:14:09.354 ************************************ 00:14:09.354 END TEST nvme_flexible_data_placement 00:14:09.354 ************************************ 00:14:09.354 00:14:09.354 real 0m0.362s 00:14:09.354 user 0m0.129s 00:14:09.354 sys 0m0.131s 00:14:09.354 11:28:14 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:09.354 11:28:14 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:14:09.354 ************************************ 00:14:09.354 END TEST nvme_fdp 00:14:09.354 ************************************ 00:14:09.354 00:14:09.354 real 0m9.013s 00:14:09.354 user 0m1.720s 00:14:09.354 sys 0m2.238s 00:14:09.354 11:28:14 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:09.354 11:28:14 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:14:09.354 11:28:14 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:14:09.354 11:28:14 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:14:09.354 11:28:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:09.354 11:28:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:09.354 11:28:14 -- common/autotest_common.sh@10 -- # set +x 00:14:09.354 ************************************ 00:14:09.354 START TEST nvme_rpc 00:14:09.354 ************************************ 00:14:09.354 11:28:14 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:14:09.354 * Looking for test storage... 
00:14:09.354 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:14:09.354 11:28:15 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:09.354 11:28:15 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:14:09.354 11:28:15 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:09.613 11:28:15 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:09.613 11:28:15 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:09.613 11:28:15 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:09.613 11:28:15 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:09.613 11:28:15 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:14:09.613 11:28:15 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:14:09.613 11:28:15 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:14:09.613 11:28:15 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:14:09.613 11:28:15 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:14:09.613 11:28:15 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:14:09.613 11:28:15 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:14:09.613 11:28:15 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:09.613 11:28:15 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:14:09.613 11:28:15 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:14:09.613 11:28:15 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:09.613 11:28:15 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:09.613 11:28:15 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:14:09.613 11:28:15 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:14:09.613 11:28:15 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:09.613 11:28:15 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:14:09.613 11:28:15 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:14:09.613 11:28:15 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:14:09.613 11:28:15 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:14:09.613 11:28:15 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:09.613 11:28:15 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:14:09.613 11:28:15 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:14:09.613 11:28:15 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:09.613 11:28:15 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:09.613 11:28:15 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:14:09.613 11:28:15 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:09.613 11:28:15 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:09.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.613 --rc genhtml_branch_coverage=1 00:14:09.613 --rc genhtml_function_coverage=1 00:14:09.613 --rc genhtml_legend=1 00:14:09.613 --rc geninfo_all_blocks=1 00:14:09.613 --rc geninfo_unexecuted_blocks=1 00:14:09.613 00:14:09.613 ' 00:14:09.613 11:28:15 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:09.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.613 --rc genhtml_branch_coverage=1 00:14:09.613 --rc genhtml_function_coverage=1 00:14:09.613 --rc genhtml_legend=1 00:14:09.613 --rc geninfo_all_blocks=1 00:14:09.613 --rc geninfo_unexecuted_blocks=1 00:14:09.613 00:14:09.613 ' 00:14:09.613 11:28:15 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:14:09.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.613 --rc genhtml_branch_coverage=1 00:14:09.613 --rc genhtml_function_coverage=1 00:14:09.613 --rc genhtml_legend=1 00:14:09.613 --rc geninfo_all_blocks=1 00:14:09.613 --rc geninfo_unexecuted_blocks=1 00:14:09.613 00:14:09.613 ' 00:14:09.613 11:28:15 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:09.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.613 --rc genhtml_branch_coverage=1 00:14:09.613 --rc genhtml_function_coverage=1 00:14:09.613 --rc genhtml_legend=1 00:14:09.613 --rc geninfo_all_blocks=1 00:14:09.613 --rc geninfo_unexecuted_blocks=1 00:14:09.613 00:14:09.613 ' 00:14:09.613 11:28:15 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:09.613 11:28:15 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:14:09.613 11:28:15 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:14:09.613 11:28:15 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:14:09.613 11:28:15 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:14:09.613 11:28:15 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:14:09.613 11:28:15 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:14:09.613 11:28:15 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:14:09.613 11:28:15 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:09.613 11:28:15 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:09.613 11:28:15 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:14:09.613 11:28:15 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:14:09.613 11:28:15 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:14:09.613 11:28:15 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:14:09.613 11:28:15 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:14:09.613 11:28:15 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67741 00:14:09.613 11:28:15 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:14:09.613 11:28:15 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:14:09.613 11:28:15 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67741 00:14:09.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.613 11:28:15 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 67741 ']' 00:14:09.613 11:28:15 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.613 11:28:15 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:09.613 11:28:15 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.613 11:28:15 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:09.613 11:28:15 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.872 [2024-11-20 11:28:15.503025] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
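The get_first_nvme_bdf helper traced above builds its list by asking gen_nvme.sh for a JSON config and pulling each controller's traddr out with jq, then taking the first entry. Condensed into a few lines, with rootdir as used in this run:

    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers detected" >&2; exit 1; }
    bdf=${bdfs[0]}                        # first of the four here: 0000:00:10.0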
00:14:09.872 [2024-11-20 11:28:15.503508] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67741 ] 00:14:10.130 [2024-11-20 11:28:15.698896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:10.130 [2024-11-20 11:28:15.855585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.130 [2024-11-20 11:28:15.855615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:11.502 11:28:16 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:11.503 11:28:16 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:11.503 11:28:16 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:14:11.503 Nvme0n1 00:14:11.503 11:28:17 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:14:11.503 11:28:17 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:14:11.760 request: 00:14:11.760 { 00:14:11.760 "bdev_name": "Nvme0n1", 00:14:11.760 "filename": "non_existing_file", 00:14:11.760 "method": "bdev_nvme_apply_firmware", 00:14:11.760 "req_id": 1 00:14:11.760 } 00:14:11.760 Got JSON-RPC error response 00:14:11.760 response: 00:14:11.760 { 00:14:11.760 "code": -32603, 00:14:11.760 "message": "open file failed." 00:14:11.760 } 00:14:11.760 11:28:17 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:14:11.760 11:28:17 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:14:11.760 11:28:17 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:14:12.018 11:28:17 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:12.018 11:28:17 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67741 00:14:12.018 11:28:17 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 67741 ']' 00:14:12.018 11:28:17 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 67741 00:14:12.018 11:28:17 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:12.018 11:28:17 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:12.018 11:28:17 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67741 00:14:12.018 killing process with pid 67741 00:14:12.018 11:28:17 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:12.018 11:28:17 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:12.018 11:28:17 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67741' 00:14:12.018 11:28:17 nvme_rpc -- common/autotest_common.sh@973 -- # kill 67741 00:14:12.018 11:28:17 nvme_rpc -- common/autotest_common.sh@978 -- # wait 67741 00:14:15.390 ************************************ 00:14:15.390 END TEST nvme_rpc 00:14:15.390 ************************************ 00:14:15.390 00:14:15.390 real 0m5.477s 00:14:15.390 user 0m10.261s 00:14:15.390 sys 0m0.841s 00:14:15.390 11:28:20 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:15.390 11:28:20 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:15.390 11:28:20 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:14:15.390 11:28:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:14:15.390 11:28:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:15.390 11:28:20 -- common/autotest_common.sh@10 -- # set +x 00:14:15.390 ************************************ 00:14:15.390 START TEST nvme_rpc_timeouts 00:14:15.390 ************************************ 00:14:15.390 11:28:20 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:14:15.390 * Looking for test storage... 00:14:15.390 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:14:15.390 11:28:20 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:15.390 11:28:20 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:15.390 11:28:20 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:14:15.390 11:28:20 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:15.390 11:28:20 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:15.390 11:28:20 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:15.390 11:28:20 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:15.390 11:28:20 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:14:15.390 11:28:20 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:14:15.390 11:28:20 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:14:15.390 11:28:20 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:14:15.390 11:28:20 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:14:15.390 11:28:20 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:14:15.390 11:28:20 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:14:15.390 11:28:20 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:15.390 11:28:20 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:14:15.390 11:28:20 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:14:15.390 11:28:20 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:15.390 11:28:20 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:15.390 11:28:20 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:14:15.390 11:28:20 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:14:15.390 11:28:20 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:15.390 11:28:20 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:14:15.390 11:28:20 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:14:15.390 11:28:20 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:14:15.390 11:28:20 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:14:15.390 11:28:20 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:15.390 11:28:20 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:14:15.390 11:28:20 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:14:15.390 11:28:20 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:15.390 11:28:20 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:15.390 11:28:20 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:14:15.390 11:28:20 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:15.390 11:28:20 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:15.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.390 --rc genhtml_branch_coverage=1 00:14:15.390 --rc genhtml_function_coverage=1 00:14:15.390 --rc genhtml_legend=1 00:14:15.390 --rc geninfo_all_blocks=1 00:14:15.390 --rc geninfo_unexecuted_blocks=1 00:14:15.390 00:14:15.390 ' 00:14:15.390 11:28:20 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:15.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.390 --rc genhtml_branch_coverage=1 00:14:15.390 --rc genhtml_function_coverage=1 00:14:15.390 --rc genhtml_legend=1 00:14:15.390 --rc geninfo_all_blocks=1 00:14:15.390 --rc geninfo_unexecuted_blocks=1 00:14:15.390 00:14:15.390 ' 00:14:15.390 11:28:20 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:15.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.390 --rc genhtml_branch_coverage=1 00:14:15.390 --rc genhtml_function_coverage=1 00:14:15.390 --rc genhtml_legend=1 00:14:15.390 --rc geninfo_all_blocks=1 00:14:15.390 --rc geninfo_unexecuted_blocks=1 00:14:15.390 00:14:15.390 ' 00:14:15.390 11:28:20 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:15.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.390 --rc genhtml_branch_coverage=1 00:14:15.390 --rc genhtml_function_coverage=1 00:14:15.390 --rc genhtml_legend=1 00:14:15.390 --rc geninfo_all_blocks=1 00:14:15.390 --rc geninfo_unexecuted_blocks=1 00:14:15.390 00:14:15.390 ' 00:14:15.390 11:28:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:15.390 11:28:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67823 00:14:15.390 11:28:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67823 00:14:15.390 11:28:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67860 00:14:15.390 11:28:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:14:15.390 11:28:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:14:15.390 11:28:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67860 00:14:15.390 11:28:20 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 67860 ']' 00:14:15.391 11:28:20 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.391 11:28:20 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:15.391 11:28:20 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.391 11:28:20 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:15.391 11:28:20 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:14:15.391 [2024-11-20 11:28:20.927021] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:14:15.391 [2024-11-20 11:28:20.927589] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67860 ] 00:14:15.649 [2024-11-20 11:28:21.158393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:15.649 [2024-11-20 11:28:21.316598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.649 [2024-11-20 11:28:21.316611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:17.024 Checking default timeout settings: 00:14:17.024 11:28:22 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:17.024 11:28:22 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:14:17.024 11:28:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:14:17.024 11:28:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:14:17.282 Making settings changes with rpc: 00:14:17.282 11:28:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:14:17.282 11:28:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:14:17.540 Check default vs. modified settings: 00:14:17.540 11:28:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:14:17.540 11:28:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:14:18.111 11:28:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:14:18.111 11:28:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:14:18.111 11:28:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:14:18.111 11:28:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67823 00:14:18.111 11:28:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:14:18.111 11:28:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:14:18.111 11:28:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67823 00:14:18.111 11:28:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:14:18.111 11:28:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:14:18.111 Setting action_on_timeout is changed as expected. 00:14:18.112 11:28:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:14:18.112 11:28:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:14:18.112 11:28:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:14:18.112 11:28:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:14:18.112 11:28:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67823 00:14:18.112 11:28:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:14:18.112 11:28:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:14:18.112 11:28:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:14:18.112 11:28:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67823 00:14:18.112 11:28:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:14:18.112 11:28:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:14:18.112 Setting timeout_us is changed as expected. 00:14:18.112 11:28:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:14:18.112 11:28:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:14:18.112 11:28:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
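Each "changed as expected" line above, and the timeout_admin_us round that follows, comes from the same three-stage pipeline: grep the setting out of a saved config, take the second column, strip non-alphanumerics, then compare default against modified. The whole loop in isolation, with the scratch paths from this run:

    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$setting" /tmp/settings_default_67823 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified_67823 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [[ $before == "$after" ]] && { echo "Setting $setting was not changed!" >&2; exit 1; }
        echo "Setting $setting is changed as expected."
    done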
00:14:18.112 11:28:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:14:18.112 11:28:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67823 00:14:18.112 11:28:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:14:18.112 11:28:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:14:18.112 11:28:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:14:18.112 11:28:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67823 00:14:18.112 11:28:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:14:18.112 11:28:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:14:18.112 Setting timeout_admin_us is changed as expected. 00:14:18.112 11:28:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:14:18.112 11:28:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:14:18.112 11:28:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:14:18.112 11:28:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:14:18.112 11:28:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67823 /tmp/settings_modified_67823 00:14:18.112 11:28:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67860 00:14:18.112 11:28:23 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 67860 ']' 00:14:18.112 11:28:23 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 67860 00:14:18.112 11:28:23 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:14:18.112 11:28:23 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:18.112 11:28:23 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67860 00:14:18.112 killing process with pid 67860 00:14:18.112 11:28:23 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:18.112 11:28:23 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:18.112 11:28:23 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67860' 00:14:18.112 11:28:23 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 67860 00:14:18.112 11:28:23 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 67860 00:14:21.434 RPC TIMEOUT SETTING TEST PASSED. 00:14:21.434 11:28:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
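Teardown above follows the pattern these tests use throughout: a trap armed at startup guarantees the spdk_tgt is killed and the scratch files removed on any signal or early exit, and the happy path disarms it before cleaning up in an orderly way. A skeleton of that pattern, with the pid and paths taken from this run:

    spdk_tgt_pid=67860
    tmp_default=/tmp/settings_default_67823
    tmp_modified=/tmp/settings_modified_67823
    trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmp_default} ${tmp_modified}; exit 1' SIGINT SIGTERM EXIT
    # ... timeout checks run here ...
    trap - SIGINT SIGTERM EXIT            # disarm before the orderly teardown
    rm -f "$tmp_default" "$tmp_modified"
    kill "$spdk_tgt_pid" && wait "$spdk_tgt_pid"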
00:14:21.434 00:14:21.434 real 0m6.013s 00:14:21.434 user 0m11.679s 00:14:21.434 sys 0m0.863s 00:14:21.434 ************************************ 00:14:21.434 END TEST nvme_rpc_timeouts 00:14:21.434 ************************************ 00:14:21.434 11:28:26 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:21.434 11:28:26 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:14:21.434 11:28:26 -- spdk/autotest.sh@239 -- # uname -s 00:14:21.434 11:28:26 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:14:21.434 11:28:26 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:14:21.434 11:28:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:21.434 11:28:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:21.434 11:28:26 -- common/autotest_common.sh@10 -- # set +x 00:14:21.434 ************************************ 00:14:21.434 START TEST sw_hotplug 00:14:21.434 ************************************ 00:14:21.434 11:28:26 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:14:21.434 * Looking for test storage... 00:14:21.434 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:14:21.434 11:28:26 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:21.434 11:28:26 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:14:21.434 11:28:26 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:21.434 11:28:26 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:21.434 11:28:26 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:21.434 11:28:26 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:21.434 11:28:26 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:21.434 11:28:26 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:14:21.434 11:28:26 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:14:21.434 11:28:26 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:14:21.434 11:28:26 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:14:21.434 11:28:26 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:14:21.434 11:28:26 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:14:21.434 11:28:26 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:14:21.434 11:28:26 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:21.434 11:28:26 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:14:21.434 11:28:26 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:14:21.434 11:28:26 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:21.434 11:28:26 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:21.434 11:28:26 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:14:21.434 11:28:26 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:14:21.434 11:28:26 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:21.434 11:28:26 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:14:21.434 11:28:26 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:14:21.434 11:28:26 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:14:21.434 11:28:26 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:14:21.434 11:28:26 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:21.434 11:28:26 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:14:21.434 11:28:26 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:14:21.434 11:28:26 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:21.434 11:28:26 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:21.434 11:28:26 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:14:21.434 11:28:26 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:21.434 11:28:26 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:21.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.434 --rc genhtml_branch_coverage=1 00:14:21.434 --rc genhtml_function_coverage=1 00:14:21.434 --rc genhtml_legend=1 00:14:21.434 --rc geninfo_all_blocks=1 00:14:21.434 --rc geninfo_unexecuted_blocks=1 00:14:21.434 00:14:21.434 ' 00:14:21.434 11:28:26 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:21.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.435 --rc genhtml_branch_coverage=1 00:14:21.435 --rc genhtml_function_coverage=1 00:14:21.435 --rc genhtml_legend=1 00:14:21.435 --rc geninfo_all_blocks=1 00:14:21.435 --rc geninfo_unexecuted_blocks=1 00:14:21.435 00:14:21.435 ' 00:14:21.435 11:28:26 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:21.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.435 --rc genhtml_branch_coverage=1 00:14:21.435 --rc genhtml_function_coverage=1 00:14:21.435 --rc genhtml_legend=1 00:14:21.435 --rc geninfo_all_blocks=1 00:14:21.435 --rc geninfo_unexecuted_blocks=1 00:14:21.435 00:14:21.435 ' 00:14:21.435 11:28:26 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:21.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.435 --rc genhtml_branch_coverage=1 00:14:21.435 --rc genhtml_function_coverage=1 00:14:21.435 --rc genhtml_legend=1 00:14:21.435 --rc geninfo_all_blocks=1 00:14:21.435 --rc geninfo_unexecuted_blocks=1 00:14:21.435 00:14:21.435 ' 00:14:21.435 11:28:26 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:21.435 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:21.693 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:21.693 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:21.693 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:21.693 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:21.693 11:28:27 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:14:21.693 11:28:27 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:14:21.693 11:28:27 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
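The nvme_in_userspace expansion traced below boils down to one lspci pipeline plus a per-device driver check: PCI class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVM Express). The pipeline on its own, copied from the trace; note the quoting around 0108 is deliberate, since lspci -mm emits quoted fields:

    lspci -mm -n -D \
        | grep -i -- -p02 \
        | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' \
        | tr -d '"'                       # -> 0000:00:10.0 ... 0000:00:13.0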
00:14:21.693 11:28:27 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@233 -- # local class 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@18 -- # local i 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@18 -- # local i 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@18 -- # local i 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:14:21.693 11:28:27 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@18 -- # local i 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:14:21.693 11:28:27 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:14:21.693 11:28:27 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:14:21.693 11:28:27 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:14:21.693 11:28:27 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:22.265 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:22.265 Waiting for block devices as requested 00:14:22.265 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:14:22.523 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:14:22.523 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:14:22.781 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:14:28.129 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:14:28.129 11:28:33 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:14:28.129 11:28:33 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:28.129 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:14:28.388 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:28.388 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:14:28.647 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:14:28.905 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:28.905 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:28.905 11:28:34 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:14:28.905 11:28:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:29.164 11:28:34 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:14:29.164 11:28:34 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:14:29.164 11:28:34 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68746 00:14:29.164 11:28:34 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:14:29.164 11:28:34 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:14:29.164 11:28:34 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:14:29.164 11:28:34 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:14:29.164 11:28:34 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:14:29.164 11:28:34 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:14:29.164 11:28:34 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:14:29.164 11:28:34 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:14:29.164 11:28:34 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:14:29.164 11:28:34 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:14:29.164 11:28:34 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:14:29.164 11:28:34 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:14:29.164 11:28:34 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:14:29.164 11:28:34 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:14:29.423 Initializing NVMe Controllers 00:14:29.423 Attaching to 0000:00:10.0 00:14:29.423 Attaching to 0000:00:11.0 00:14:29.423 Attached to 0000:00:10.0 00:14:29.423 Attached to 0000:00:11.0 00:14:29.423 Initialization complete. Starting I/O... 
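[Editor's note] Before launching the hotplug example, the trace above walks scripts/common.sh's iter_pci_class_code to pick the test controllers by PCI class code. A minimal stand-alone sketch of that pipeline, reconstructed from the xtrace (the function name and comments are mine, not verbatim scripts/common.sh):

    # Enumerate NVMe controllers: class 01 (mass storage), subclass 08 (NVM),
    # prog-if 02 (NVM Express) -- exactly the values the trace computes above.
    nvme_bdfs_sketch() {
        local class subclass progif
        class=$(printf '%02x' 1)     # -> 01
        subclass=$(printf '%02x' 8)  # -> 08
        progif=$(printf '%02x' 2)    # -> 02
        # lspci -mm -n -D prints the class code quoted in field 2 and tags
        # the prog-if as "-p02"; keep matching lines and print the BDF.
        lspci -mm -n -D \
            | grep -i -- "-p${progif}" \
            | awk -v "cc=\"${class}${subclass}\"" -F ' ' '{if (cc ~ $2) print $1}' \
            | tr -d '"'
    }
    nvme_bdfs_sketch  # on this VM: 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0

The harness then keeps only the first nvme_count=2 of the four hits (0000:00:10.0 and 0000:00:11.0) and denies the rest via PCI_ALLOWED, which is why 0000:00:12.0 and 0000:00:13.0 show "Skipping denied controller" at rebind time.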
00:14:29.423 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:14:29.423 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:14:29.423 00:14:30.359 QEMU NVMe Ctrl (12340 ): 1104 I/Os completed (+1104) 00:14:30.359 QEMU NVMe Ctrl (12341 ): 1209 I/Os completed (+1209) 00:14:30.359 00:14:31.296 QEMU NVMe Ctrl (12340 ): 2509 I/Os completed (+1405) 00:14:31.296 QEMU NVMe Ctrl (12341 ): 2620 I/Os completed (+1411) 00:14:31.296 00:14:32.675 QEMU NVMe Ctrl (12340 ): 4109 I/Os completed (+1600) 00:14:32.675 QEMU NVMe Ctrl (12341 ): 4252 I/Os completed (+1632) 00:14:32.675 00:14:33.336 QEMU NVMe Ctrl (12340 ): 5589 I/Os completed (+1480) 00:14:33.336 QEMU NVMe Ctrl (12341 ): 5796 I/Os completed (+1544) 00:14:33.336 00:14:34.273 QEMU NVMe Ctrl (12340 ): 7133 I/Os completed (+1544) 00:14:34.273 QEMU NVMe Ctrl (12341 ): 7372 I/Os completed (+1576) 00:14:34.273 00:14:35.207 11:28:40 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:35.208 11:28:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:35.208 11:28:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:35.208 [2024-11-20 11:28:40.763592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:35.208 Controller removed: QEMU NVMe Ctrl (12340 ) 00:14:35.208 [2024-11-20 11:28:40.765774] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:35.208 [2024-11-20 11:28:40.765848] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:35.208 [2024-11-20 11:28:40.765877] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:35.208 [2024-11-20 11:28:40.765905] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:35.208 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:14:35.208 [2024-11-20 11:28:40.769391] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:35.208 [2024-11-20 11:28:40.769528] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:35.208 [2024-11-20 11:28:40.769566] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:35.208 [2024-11-20 11:28:40.769600] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:35.208 11:28:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:35.208 11:28:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:35.208 [2024-11-20 11:28:40.801315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
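[Editor's note] The bare "echo 1" traced at sw_hotplug.sh line 40 is a surprise removal: writing 1 to the device's sysfs remove node while the hotplug app is mid-I/O, which is what drives the nvme_ctrlr_fail and aborting-outstanding-command errors above. xtrace strips redirection targets, so the sysfs paths below are an assumption based on the standard Linux PCI interface, not a verbatim copy of the script:

    bdf=0000:00:10.0
    # line 40 ("echo 1"): detach the function out from under the driver
    echo 1 > "/sys/bus/pci/devices/${bdf}/remove"
    # line 56 ("echo 1"), once the removal has been handled: rediscover it
    echo 1 > /sys/bus/pci/rescan
    # lines 58-62: steer the rediscovered function to uio_pci_generic via
    # driver_override, then clear the override (the trace writes the BDF
    # twice; a single drivers_probe write is shown here as the minimal form)
    echo uio_pci_generic > "/sys/bus/pci/devices/${bdf}/driver_override"
    echo "${bdf}" > /sys/bus/pci/drivers_probe
    echo '' > "/sys/bus/pci/devices/${bdf}/driver_override"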
00:14:35.208 Controller removed: QEMU NVMe Ctrl (12341 ) 00:14:35.208 [2024-11-20 11:28:40.803275] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:35.208 [2024-11-20 11:28:40.803338] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:35.208 [2024-11-20 11:28:40.803371] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:35.208 [2024-11-20 11:28:40.803397] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:35.208 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:14:35.208 [2024-11-20 11:28:40.806574] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:35.208 [2024-11-20 11:28:40.806635] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:35.208 [2024-11-20 11:28:40.806662] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:35.208 [2024-11-20 11:28:40.806682] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:35.208 11:28:40 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:14:35.208 11:28:40 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:35.208 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:14:35.208 EAL: Scan for (pci) bus failed. 00:14:35.208 11:28:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:35.208 11:28:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:35.208 11:28:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:35.466 11:28:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:35.466 11:28:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:35.466 11:28:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:35.466 11:28:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:35.466 11:28:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:35.466 Attaching to 0000:00:10.0 00:14:35.466 Attached to 0000:00:10.0 00:14:35.466 QEMU NVMe Ctrl (12340 ): 4 I/Os completed (+4) 00:14:35.466 00:14:35.466 11:28:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:35.466 11:28:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:35.466 11:28:41 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:35.466 Attaching to 0000:00:11.0 00:14:35.466 Attached to 0000:00:11.0 00:14:36.402 QEMU NVMe Ctrl (12340 ): 1600 I/Os completed (+1596) 00:14:36.403 QEMU NVMe Ctrl (12341 ): 1427 I/Os completed (+1427) 00:14:36.403 00:14:37.339 QEMU NVMe Ctrl (12340 ): 3004 I/Os completed (+1404) 00:14:37.339 QEMU NVMe Ctrl (12341 ): 2853 I/Os completed (+1426) 00:14:37.339 00:14:38.711 QEMU NVMe Ctrl (12340 ): 4435 I/Os completed (+1431) 00:14:38.711 QEMU NVMe Ctrl (12341 ): 4479 I/Os completed (+1626) 00:14:38.711 00:14:39.278 QEMU NVMe Ctrl (12340 ): 5899 I/Os completed (+1464) 00:14:39.278 QEMU NVMe Ctrl (12341 ): 6051 I/Os completed (+1572) 00:14:39.278 00:14:40.652 QEMU NVMe Ctrl (12340 ): 7301 I/Os completed (+1402) 00:14:40.652 QEMU NVMe Ctrl (12341 ): 7711 I/Os completed (+1660) 00:14:40.652 00:14:41.588 QEMU NVMe Ctrl (12340 ): 9021 I/Os completed (+1720) 00:14:41.588 QEMU NVMe Ctrl (12341 ): 9451 I/Os completed (+1740) 00:14:41.588 00:14:42.523 QEMU NVMe Ctrl (12340 ): 10701 I/Os completed (+1680) 00:14:42.523 QEMU 
NVMe Ctrl (12341 ): 11132 I/Os completed (+1681) 00:14:42.523 00:14:43.458 QEMU NVMe Ctrl (12340 ): 12104 I/Os completed (+1403) 00:14:43.458 QEMU NVMe Ctrl (12341 ): 12542 I/Os completed (+1410) 00:14:43.458 00:14:44.392 QEMU NVMe Ctrl (12340 ): 13780 I/Os completed (+1676) 00:14:44.392 QEMU NVMe Ctrl (12341 ): 14230 I/Os completed (+1688) 00:14:44.392 00:14:45.328 QEMU NVMe Ctrl (12340 ): 15408 I/Os completed (+1628) 00:14:45.328 QEMU NVMe Ctrl (12341 ): 15879 I/Os completed (+1649) 00:14:45.328 00:14:46.705 QEMU NVMe Ctrl (12340 ): 16667 I/Os completed (+1259) 00:14:46.705 QEMU NVMe Ctrl (12341 ): 17234 I/Os completed (+1355) 00:14:46.705 00:14:47.659 QEMU NVMe Ctrl (12340 ): 18269 I/Os completed (+1602) 00:14:47.659 QEMU NVMe Ctrl (12341 ): 18847 I/Os completed (+1613) 00:14:47.659 00:14:47.659 11:28:53 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:14:47.659 11:28:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:47.659 11:28:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:47.659 11:28:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:47.659 [2024-11-20 11:28:53.133741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:47.659 Controller removed: QEMU NVMe Ctrl (12340 ) 00:14:47.659 [2024-11-20 11:28:53.137670] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:47.659 [2024-11-20 11:28:53.137783] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:47.659 [2024-11-20 11:28:53.137829] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:47.659 [2024-11-20 11:28:53.137870] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:47.659 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:14:47.659 [2024-11-20 11:28:53.142600] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:47.659 [2024-11-20 11:28:53.142693] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:47.659 [2024-11-20 11:28:53.142737] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:47.659 [2024-11-20 11:28:53.142773] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:47.659 11:28:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:47.659 11:28:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:47.659 [2024-11-20 11:28:53.172440] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:14:47.659 Controller removed: QEMU NVMe Ctrl (12341 ) 00:14:47.659 [2024-11-20 11:28:53.175533] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:47.659 [2024-11-20 11:28:53.175629] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:47.659 [2024-11-20 11:28:53.175676] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:47.659 [2024-11-20 11:28:53.175714] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:47.659 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:14:47.659 [2024-11-20 11:28:53.180105] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:47.659 [2024-11-20 11:28:53.180172] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:47.659 [2024-11-20 11:28:53.180203] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:47.659 [2024-11-20 11:28:53.180235] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:47.659 11:28:53 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:14:47.659 11:28:53 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:47.659 11:28:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:47.659 11:28:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:47.659 11:28:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:47.918 11:28:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:47.918 11:28:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:47.918 11:28:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:47.918 11:28:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:47.918 11:28:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:47.918 Attaching to 0000:00:10.0 00:14:47.918 Attached to 0000:00:10.0 00:14:47.918 11:28:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:47.918 11:28:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:47.918 11:28:53 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:47.918 Attaching to 0000:00:11.0 00:14:47.918 Attached to 0000:00:11.0 00:14:48.496 QEMU NVMe Ctrl (12340 ): 864 I/Os completed (+864) 00:14:48.496 QEMU NVMe Ctrl (12341 ): 677 I/Os completed (+677) 00:14:48.496 00:14:49.437 QEMU NVMe Ctrl (12340 ): 2624 I/Os completed (+1760) 00:14:49.437 QEMU NVMe Ctrl (12341 ): 2454 I/Os completed (+1777) 00:14:49.437 00:14:50.371 QEMU NVMe Ctrl (12340 ): 4063 I/Os completed (+1439) 00:14:50.371 QEMU NVMe Ctrl (12341 ): 3917 I/Os completed (+1463) 00:14:50.371 00:14:51.306 QEMU NVMe Ctrl (12340 ): 5639 I/Os completed (+1576) 00:14:51.306 QEMU NVMe Ctrl (12341 ): 5514 I/Os completed (+1597) 00:14:51.306 00:14:52.680 QEMU NVMe Ctrl (12340 ): 7187 I/Os completed (+1548) 00:14:52.680 QEMU NVMe Ctrl (12341 ): 7078 I/Os completed (+1564) 00:14:52.680 00:14:53.615 QEMU NVMe Ctrl (12340 ): 8851 I/Os completed (+1664) 00:14:53.615 QEMU NVMe Ctrl (12341 ): 8743 I/Os completed (+1665) 00:14:53.615 00:14:54.548 QEMU NVMe Ctrl (12340 ): 10427 I/Os completed (+1576) 00:14:54.548 QEMU NVMe Ctrl (12341 ): 10405 I/Os completed (+1662) 00:14:54.548 00:14:55.481 QEMU NVMe Ctrl (12340 ): 11810 I/Os completed (+1383) 00:14:55.481 QEMU NVMe Ctrl (12341 ): 11900 I/Os completed (+1495) 00:14:55.481 00:14:56.414 QEMU 
NVMe Ctrl (12340 ): 13226 I/Os completed (+1416) 00:14:56.414 QEMU NVMe Ctrl (12341 ): 13339 I/Os completed (+1439) 00:14:56.414 00:14:57.400 QEMU NVMe Ctrl (12340 ): 14900 I/Os completed (+1674) 00:14:57.400 QEMU NVMe Ctrl (12341 ): 15244 I/Os completed (+1905) 00:14:57.400 00:14:58.335 QEMU NVMe Ctrl (12340 ): 16451 I/Os completed (+1551) 00:14:58.335 QEMU NVMe Ctrl (12341 ): 16888 I/Os completed (+1644) 00:14:58.335 00:14:59.710 QEMU NVMe Ctrl (12340 ): 17923 I/Os completed (+1472) 00:14:59.710 QEMU NVMe Ctrl (12341 ): 18331 I/Os completed (+1443) 00:14:59.710 00:14:59.968 11:29:05 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:14:59.968 11:29:05 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:59.968 11:29:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:59.968 11:29:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:59.968 [2024-11-20 11:29:05.587397] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:59.968 Controller removed: QEMU NVMe Ctrl (12340 ) 00:14:59.968 [2024-11-20 11:29:05.592127] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:59.968 [2024-11-20 11:29:05.592226] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:59.968 [2024-11-20 11:29:05.592266] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:59.968 [2024-11-20 11:29:05.592310] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:59.968 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:14:59.968 [2024-11-20 11:29:05.596078] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:59.968 [2024-11-20 11:29:05.596140] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:59.968 [2024-11-20 11:29:05.596162] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:59.968 [2024-11-20 11:29:05.596189] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:59.968 EAL: Cannot open sysfs resource 00:14:59.968 EAL: pci_scan_one(): cannot parse resource 00:14:59.968 EAL: Scan for (pci) bus failed. 00:14:59.968 11:29:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:59.968 11:29:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:59.969 [2024-11-20 11:29:05.622465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:14:59.969 Controller removed: QEMU NVMe Ctrl (12341 ) 00:14:59.969 [2024-11-20 11:29:05.624450] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:59.969 [2024-11-20 11:29:05.624527] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:59.969 [2024-11-20 11:29:05.624555] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:59.969 [2024-11-20 11:29:05.624578] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:59.969 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:14:59.969 [2024-11-20 11:29:05.627820] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:59.969 [2024-11-20 11:29:05.627872] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:59.969 [2024-11-20 11:29:05.627903] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:59.969 [2024-11-20 11:29:05.627924] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:59.969 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:14:59.969 EAL: Scan for (pci) bus failed. 00:14:59.969 11:29:05 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:14:59.969 11:29:05 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:00.285 11:29:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:00.285 11:29:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:00.285 11:29:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:00.285 11:29:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:00.285 11:29:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:00.285 11:29:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:00.285 11:29:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:00.285 11:29:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:15:00.285 Attaching to 0000:00:10.0 00:15:00.285 Attached to 0000:00:10.0 00:15:00.285 11:29:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:00.285 11:29:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:00.285 11:29:06 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:00.285 Attaching to 0000:00:11.0 00:15:00.285 Attached to 0000:00:11.0 00:15:00.285 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:15:00.285 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:15:00.285 [2024-11-20 11:29:06.021626] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:15:12.490 11:29:18 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:15:12.490 11:29:18 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:12.490 11:29:18 sw_hotplug -- common/autotest_common.sh@719 -- # time=43.25 00:15:12.490 11:29:18 sw_hotplug -- common/autotest_common.sh@720 -- # echo 43.25 00:15:12.490 11:29:18 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:15:12.490 11:29:18 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.25 00:15:12.490 11:29:18 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.25 2 00:15:12.490 remove_attach_helper took 43.25s to complete (handling 2 nvme drive(s)) 11:29:18 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:15:19.046 11:29:24 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68746 00:15:19.046 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68746) - No such process 00:15:19.046 11:29:24 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68746 00:15:19.046 11:29:24 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:15:19.046 11:29:24 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:15:19.046 11:29:24 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:15:19.046 11:29:24 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=69284 00:15:19.046 11:29:24 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:15:19.046 11:29:24 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 69284 00:15:19.046 11:29:24 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:19.046 11:29:24 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 69284 ']' 00:15:19.046 11:29:24 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:19.046 11:29:24 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:19.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:19.046 11:29:24 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:19.046 11:29:24 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:19.046 11:29:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:19.046 [2024-11-20 11:29:24.199004] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
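[Editor's note] kill -0 sends no signal at all; it only probes whether the PID still exists, so the "(68746) - No such process" line above is the expected outcome once the hotplug app has exited on its own, and the wait that follows merely reaps it. The pattern, as a sketch:

    ./build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning &
    hotplug_pid=$!
    # ... drive the remove/attach cycles ...
    if ! kill -0 "$hotplug_pid" 2> /dev/null; then
        echo "hotplug app already exited"
    fi
    wait "$hotplug_pid"  # collect its exit status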
00:15:19.046 [2024-11-20 11:29:24.199212] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69284 ] 00:15:19.046 [2024-11-20 11:29:24.577718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.046 [2024-11-20 11:29:24.718637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.422 11:29:25 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:20.422 11:29:25 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:15:20.422 11:29:25 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:15:20.422 11:29:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.422 11:29:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:20.422 11:29:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.422 11:29:25 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:15:20.422 11:29:25 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:15:20.422 11:29:25 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:15:20.422 11:29:25 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:15:20.422 11:29:25 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:15:20.422 11:29:25 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:15:20.422 11:29:25 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:15:20.422 11:29:25 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:15:20.422 11:29:25 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:15:20.422 11:29:25 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:15:20.422 11:29:25 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:15:20.422 11:29:25 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:15:20.422 11:29:25 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:15:26.975 11:29:31 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:26.975 11:29:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:26.975 11:29:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:26.975 11:29:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:26.975 11:29:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:26.975 11:29:31 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:26.975 11:29:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:26.975 11:29:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:26.975 11:29:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:26.975 11:29:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:26.975 11:29:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:26.975 11:29:31 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.975 11:29:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:26.975 [2024-11-20 11:29:31.865233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
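[Editor's note] In this second pass the helper runs with use_bdev=true against the spdk_tgt it just started: hotplug is observed through the target's bdev table rather than fixed sleeps. The traced bdev_bdfs helper is one RPC plus a jq filter; a sketch using scripts/rpc.py in place of the harness's rpc_cmd wrapper (the wrapper substitution is mine, the jq path is verbatim from the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" bdev_nvme_set_hotplug -e  # line 115: enable the target's hotplug monitor
    bdev_bdfs_sketch() {
        # BDF of every NVMe controller currently backing a bdev, deduplicated
        "$rpc" bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }
    bdfs=($(bdev_bdfs_sketch))  # e.g. 0000:00:10.0 0000:00:11.0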
00:15:26.975 [2024-11-20 11:29:31.868309] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:26.975 [2024-11-20 11:29:31.868368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:26.975 [2024-11-20 11:29:31.868392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.975 [2024-11-20 11:29:31.868427] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:26.975 [2024-11-20 11:29:31.868442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:26.975 [2024-11-20 11:29:31.868459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.975 [2024-11-20 11:29:31.868489] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:26.975 [2024-11-20 11:29:31.868508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:26.975 [2024-11-20 11:29:31.868522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.975 [2024-11-20 11:29:31.868546] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:26.975 [2024-11-20 11:29:31.868560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:26.975 [2024-11-20 11:29:31.868577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.975 11:29:31 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.975 11:29:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:15:26.975 11:29:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:26.975 [2024-11-20 11:29:32.265251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
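[Editor's note] The "(( 2 > 0 ))" and "sleep 0.5" steps above are one turn of a poll loop: after the surprise removal, the helper re-queries the target until no bdev still reports either BDF, printing the "Still waiting ... to be gone" lines seen just below. Roughly, reusing bdev_bdfs_sketch from the previous note:

    bdfs=($(bdev_bdfs_sketch))
    while ((${#bdfs[@]} > 0)); do
        sleep 0.5
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        bdfs=($(bdev_bdfs_sketch))
    done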
00:15:26.975 [2024-11-20 11:29:32.268781] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:26.975 [2024-11-20 11:29:32.268838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:26.975 [2024-11-20 11:29:32.268863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.975 [2024-11-20 11:29:32.268894] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:26.976 [2024-11-20 11:29:32.268912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:26.976 [2024-11-20 11:29:32.268927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.976 [2024-11-20 11:29:32.268945] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:26.976 [2024-11-20 11:29:32.268959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:26.976 [2024-11-20 11:29:32.268976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.976 [2024-11-20 11:29:32.268991] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:26.976 [2024-11-20 11:29:32.269011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:26.976 [2024-11-20 11:29:32.269034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:26.976 11:29:32 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:15:26.976 11:29:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:26.976 11:29:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:26.976 11:29:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:26.976 11:29:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:26.976 11:29:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:26.976 11:29:32 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.976 11:29:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:26.976 11:29:32 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.976 11:29:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:26.976 11:29:32 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:26.976 11:29:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:26.976 11:29:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:26.976 11:29:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:26.976 11:29:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:26.976 11:29:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:26.976 11:29:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:26.976 11:29:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:26.976 11:29:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:15:27.233 11:29:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:27.233 11:29:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:27.233 11:29:32 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:39.443 11:29:44 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:39.443 11:29:44 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:39.443 11:29:44 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:39.443 11:29:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:39.443 11:29:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:39.443 11:29:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:39.443 11:29:44 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.443 11:29:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:39.443 11:29:44 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.443 11:29:44 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:39.443 11:29:44 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:39.443 11:29:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:39.443 11:29:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:39.443 11:29:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:39.443 11:29:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:39.443 11:29:44 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:39.443 11:29:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:39.443 11:29:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:39.443 11:29:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:39.443 11:29:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:39.443 11:29:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:39.443 11:29:44 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.443 11:29:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:39.443 [2024-11-20 11:29:44.965569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
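[Editor's note] On the reattach side the check flips: line 68 now evaluates true, and the pattern match at line 71 asserts that the recovered, sorted BDF list is exactly the two test controllers before the next hotplug event is allowed to start. In plain form:

    expected='0000:00:10.0 0000:00:11.0'
    bdfs=($(bdev_bdfs_sketch))
    [[ ${bdfs[*]} == "$expected" ]] || {
        echo "controllers did not all come back: ${bdfs[*]:-none}" >&2
        exit 1
    }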
00:15:39.443 [2024-11-20 11:29:44.968467] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:39.443 [2024-11-20 11:29:44.968529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.443 [2024-11-20 11:29:44.968549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.443 [2024-11-20 11:29:44.968583] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:39.443 [2024-11-20 11:29:44.968597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.443 [2024-11-20 11:29:44.968618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.443 [2024-11-20 11:29:44.968643] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:39.443 [2024-11-20 11:29:44.968660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.443 [2024-11-20 11:29:44.968674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.443 [2024-11-20 11:29:44.968692] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:39.443 [2024-11-20 11:29:44.968705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.443 [2024-11-20 11:29:44.968722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.443 11:29:44 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.443 11:29:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:15:39.443 11:29:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:39.702 [2024-11-20 11:29:45.365575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:15:39.702 [2024-11-20 11:29:45.368842] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:39.702 [2024-11-20 11:29:45.368898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.702 [2024-11-20 11:29:45.368930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.702 [2024-11-20 11:29:45.368960] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:39.702 [2024-11-20 11:29:45.368978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.702 [2024-11-20 11:29:45.368993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.702 [2024-11-20 11:29:45.369012] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:39.703 [2024-11-20 11:29:45.369025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.703 [2024-11-20 11:29:45.369042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.703 [2024-11-20 11:29:45.369058] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:39.703 [2024-11-20 11:29:45.369074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.703 [2024-11-20 11:29:45.369088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.960 11:29:45 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:15:39.960 11:29:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:39.960 11:29:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:39.960 11:29:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:39.960 11:29:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:39.960 11:29:45 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.960 11:29:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:39.960 11:29:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:39.960 11:29:45 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.960 11:29:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:39.960 11:29:45 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:39.960 11:29:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:39.960 11:29:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:39.960 11:29:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:40.218 11:29:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:40.218 11:29:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:40.218 11:29:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:40.218 11:29:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:40.218 11:29:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:15:40.218 11:29:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:40.218 11:29:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:40.218 11:29:45 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:52.443 11:29:57 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:52.443 11:29:57 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:52.443 11:29:57 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:52.443 11:29:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:52.443 11:29:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:52.443 11:29:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:52.443 11:29:57 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.443 11:29:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:52.443 11:29:57 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.443 11:29:57 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:52.443 11:29:57 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:52.443 11:29:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:52.443 11:29:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:52.443 11:29:58 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:52.443 11:29:58 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:52.443 11:29:58 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:52.443 11:29:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:52.443 11:29:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:52.443 11:29:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:52.443 11:29:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:52.443 11:29:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:52.443 11:29:58 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.443 11:29:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:52.443 [2024-11-20 11:29:58.065882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:15:52.443 [2024-11-20 11:29:58.069889] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:52.443 [2024-11-20 11:29:58.069959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:52.443 [2024-11-20 11:29:58.069988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:52.443 [2024-11-20 11:29:58.070030] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:52.443 [2024-11-20 11:29:58.070050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:52.443 [2024-11-20 11:29:58.070125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:52.443 [2024-11-20 11:29:58.070147] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:52.444 [2024-11-20 11:29:58.070169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:52.444 [2024-11-20 11:29:58.070189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:52.444 [2024-11-20 11:29:58.070213] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:52.444 [2024-11-20 11:29:58.070232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:52.444 [2024-11-20 11:29:58.070256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:52.444 11:29:58 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.444 11:29:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:15:52.444 11:29:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:53.010 [2024-11-20 11:29:58.465907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:15:53.010 [2024-11-20 11:29:58.468873] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:53.010 [2024-11-20 11:29:58.468919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:53.010 [2024-11-20 11:29:58.468941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.010 [2024-11-20 11:29:58.468971] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:53.010 [2024-11-20 11:29:58.468988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:53.010 [2024-11-20 11:29:58.469002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.011 [2024-11-20 11:29:58.469020] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:53.011 [2024-11-20 11:29:58.469033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:53.011 [2024-11-20 11:29:58.469053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.011 [2024-11-20 11:29:58.469067] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:53.011 [2024-11-20 11:29:58.469083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:53.011 [2024-11-20 11:29:58.469097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.011 11:29:58 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:15:53.011 11:29:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:53.011 11:29:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:53.011 11:29:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:53.011 11:29:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:53.011 11:29:58 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.011 11:29:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:53.011 11:29:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:53.011 11:29:58 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.011 11:29:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:53.011 11:29:58 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:53.269 11:29:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:53.269 11:29:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:53.269 11:29:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:53.269 11:29:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:53.269 11:29:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:53.269 11:29:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:53.269 11:29:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:53.269 11:29:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:15:53.269 11:29:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:53.269 11:29:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:53.269 11:29:59 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:05.472 11:30:11 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:16:05.472 11:30:11 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:16:05.472 11:30:11 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:16:05.472 11:30:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:05.472 11:30:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:05.472 11:30:11 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.472 11:30:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:05.472 11:30:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:05.472 11:30:11 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.472 11:30:11 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:05.472 11:30:11 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:05.473 11:30:11 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.28 00:16:05.473 11:30:11 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.28 00:16:05.473 11:30:11 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:16:05.473 11:30:11 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.28 00:16:05.473 11:30:11 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.28 2 00:16:05.473 remove_attach_helper took 45.28s to complete (handling 2 nvme drive(s)) 11:30:11 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:16:05.473 11:30:11 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.473 11:30:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:05.473 11:30:11 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.473 11:30:11 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:16:05.473 11:30:11 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.473 11:30:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:05.473 11:30:11 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.473 11:30:11 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:16:05.473 11:30:11 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:16:05.473 11:30:11 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:16:05.473 11:30:11 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:16:05.473 11:30:11 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:16:05.473 11:30:11 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:16:05.473 11:30:11 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:16:05.473 11:30:11 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:16:05.473 11:30:11 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:16:05.473 11:30:11 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:16:05.473 11:30:11 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:16:05.473 11:30:11 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:16:05.473 11:30:11 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:16:12.124 11:30:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:12.124 11:30:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:12.124 11:30:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:12.124 11:30:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:12.124 11:30:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:12.124 11:30:17 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:16:12.124 11:30:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:12.124 11:30:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:12.124 11:30:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:12.124 11:30:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:12.124 11:30:17 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.124 11:30:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:12.124 11:30:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:12.124 [2024-11-20 11:30:17.180100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:16:12.124 [2024-11-20 11:30:17.183182] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:12.124 [2024-11-20 11:30:17.183259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.124 [2024-11-20 11:30:17.183283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.124 [2024-11-20 11:30:17.183425] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:12.124 [2024-11-20 11:30:17.183443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.124 [2024-11-20 11:30:17.183461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.124 [2024-11-20 11:30:17.183490] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:12.124 [2024-11-20 11:30:17.183508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.124 [2024-11-20 11:30:17.183522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.124 [2024-11-20 11:30:17.183541] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:12.124 [2024-11-20 11:30:17.183554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.124 [2024-11-20 11:30:17.183574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.124 11:30:17 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.124 11:30:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:16:12.124 11:30:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:16:12.124 [2024-11-20 11:30:17.579993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
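
[Annotation] The bdev_bdfs helper traced throughout this loop asks the running SPDK target for its bdevs over RPC and reduces the reply to the unique set of NVMe PCI addresses. A minimal standalone sketch of the same query, assuming SPDK's stock scripts/rpc.py client is on PATH and a target is running (the suite itself goes through its own rpc_cmd wrapper):

    #!/usr/bin/env bash
    # Sketch: list the PCI addresses (BDFs) backing a running SPDK
    # target's NVMe bdevs, deduplicated -- same jq filter as the trace.
    bdev_bdfs() {
        rpc.py bdev_get_bdevs |
            jq -r '.[].driver_specific.nvme[].pci_address' |
            sort -u
    }

    bdfs=($(bdev_bdfs))
    echo "${bdfs[@]}"   # e.g. "0000:00:10.0 0000:00:11.0" while both are attached
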
00:16:12.124 [2024-11-20 11:30:17.582913] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:12.124 [2024-11-20 11:30:17.582960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.124 [2024-11-20 11:30:17.582981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.124 [2024-11-20 11:30:17.583006] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:12.124 [2024-11-20 11:30:17.583028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.124 [2024-11-20 11:30:17.583041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.124 [2024-11-20 11:30:17.583062] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:12.124 [2024-11-20 11:30:17.583074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.124 [2024-11-20 11:30:17.583090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.124 [2024-11-20 11:30:17.583104] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:12.124 [2024-11-20 11:30:17.583119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.124 [2024-11-20 11:30:17.583132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:12.124 11:30:17 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:16:12.124 11:30:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:12.124 11:30:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:12.124 11:30:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:12.124 11:30:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:12.124 11:30:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:12.124 11:30:17 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.124 11:30:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:12.124 11:30:17 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.124 11:30:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:16:12.124 11:30:17 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:12.462 11:30:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:12.462 11:30:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:12.462 11:30:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:12.462 11:30:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:12.462 11:30:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:12.462 11:30:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:12.462 11:30:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:12.462 11:30:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:16:12.462 11:30:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:12.462 11:30:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:12.462 11:30:18 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:24.707 11:30:30 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:16:24.707 11:30:30 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:16:24.707 11:30:30 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:16:24.708 11:30:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:24.708 11:30:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:24.708 11:30:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:24.708 11:30:30 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.708 11:30:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:24.708 11:30:30 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.708 11:30:30 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:24.708 11:30:30 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:24.708 11:30:30 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:24.708 11:30:30 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:24.708 11:30:30 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:24.708 11:30:30 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:24.708 [2024-11-20 11:30:30.180491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:16:24.708 [2024-11-20 11:30:30.182800] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:24.708 [2024-11-20 11:30:30.182859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:24.708 [2024-11-20 11:30:30.182879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.708 [2024-11-20 11:30:30.182912] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:24.708 [2024-11-20 11:30:30.182926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:24.708 [2024-11-20 11:30:30.182943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.708 [2024-11-20 11:30:30.182958] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:24.708 [2024-11-20 11:30:30.182974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:24.708 [2024-11-20 11:30:30.182988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.708 [2024-11-20 11:30:30.183005] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:24.708 [2024-11-20 11:30:30.183018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:24.708 [2024-11-20 11:30:30.183034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST 
(00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.708 11:30:30 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:16:24.708 11:30:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:24.708 11:30:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:24.708 11:30:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:24.708 11:30:30 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.708 11:30:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:24.708 11:30:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:24.708 11:30:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:24.708 11:30:30 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.708 11:30:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:16:24.708 11:30:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:16:24.967 [2024-11-20 11:30:30.580482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:16:24.967 [2024-11-20 11:30:30.583357] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:24.967 [2024-11-20 11:30:30.583402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:24.967 [2024-11-20 11:30:30.583426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.967 [2024-11-20 11:30:30.583452] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:24.968 [2024-11-20 11:30:30.583489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:24.968 [2024-11-20 11:30:30.583504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.968 [2024-11-20 11:30:30.583524] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:24.968 [2024-11-20 11:30:30.583537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:24.968 [2024-11-20 11:30:30.583554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.968 [2024-11-20 11:30:30.583570] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:24.968 [2024-11-20 11:30:30.583588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:24.968 [2024-11-20 11:30:30.583607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.227 11:30:30 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:16:25.227 11:30:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:25.227 11:30:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:25.227 11:30:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:25.227 11:30:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:25.227 11:30:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r 
'.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:25.227 11:30:30 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.227 11:30:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:25.227 11:30:30 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.227 11:30:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:16:25.227 11:30:30 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:25.227 11:30:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:25.227 11:30:30 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:25.227 11:30:30 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:25.485 11:30:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:25.485 11:30:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:25.485 11:30:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:25.485 11:30:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:25.485 11:30:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:16:25.485 11:30:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:25.485 11:30:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:25.485 11:30:31 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:37.685 11:30:43 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:16:37.685 11:30:43 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:16:37.685 11:30:43 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:16:37.685 11:30:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:37.685 11:30:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:37.685 11:30:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:37.685 11:30:43 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.685 11:30:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:37.685 11:30:43 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.685 11:30:43 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:37.685 11:30:43 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:37.685 11:30:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:37.685 11:30:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:37.685 11:30:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:37.685 11:30:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:37.685 11:30:43 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:16:37.685 11:30:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:37.685 11:30:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:37.685 11:30:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:37.685 11:30:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:37.685 11:30:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:37.685 11:30:43 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.685 11:30:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:37.685 [2024-11-20 11:30:43.280807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
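
[Annotation] Each hotplug event in this loop begins by writing 1 for every device under test (sw_hotplug.sh@40 in the traces above); the destination of that write is elided from the xtrace output. The standard Linux mechanism for a software hot-remove is the sysfs remove attribute, roughly as in this hedged sketch (paths and BDFs are illustrative assumptions, not confirmed by this log):

    # Sketch: software hot-remove of PCI NVMe functions via sysfs.
    nvmes=(0000:00:10.0 0000:00:11.0)   # devices under test in this run
    for dev in "${nvmes[@]}"; do
        # Ask the kernel to detach the driver and delete the device node;
        # the SPDK target then reports the controller failed, as logged above.
        echo 1 | sudo tee "/sys/bus/pci/devices/$dev/remove" >/dev/null
    done
    # The re-attach half of the event later makes the functions reappear:
    echo 1 | sudo tee /sys/bus/pci/rescan >/dev/null
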
00:16:37.685 [2024-11-20 11:30:43.282811] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:37.685 [2024-11-20 11:30:43.282862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:37.685 [2024-11-20 11:30:43.282882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.685 [2024-11-20 11:30:43.282913] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:37.685 [2024-11-20 11:30:43.282927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:37.685 [2024-11-20 11:30:43.282945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.685 [2024-11-20 11:30:43.282960] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:37.685 [2024-11-20 11:30:43.282980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:37.685 [2024-11-20 11:30:43.282993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.685 [2024-11-20 11:30:43.283011] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:37.685 [2024-11-20 11:30:43.283025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:37.685 [2024-11-20 11:30:43.283042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.685 11:30:43 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.685 11:30:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:16:37.685 11:30:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:16:37.944 [2024-11-20 11:30:43.680846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
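
[Annotation] Between the remove and the re-attach, the helper polls the target until no NVMe bdevs remain, printing the stragglers (sw_hotplug.sh@50-51 in the traces above). Roughly, reusing the bdev_bdfs sketch from earlier:

    # Sketch: wait until the SPDK target has dropped all NVMe bdevs.
    bdfs=($(bdev_bdfs))
    while (( ${#bdfs[@]} > 0 )); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done

The traced run hits the 'Still waiting' branch once per event while the controllers drain their aborted admin commands, then proceeds to rebind the devices.
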
00:16:37.944 [2024-11-20 11:30:43.683957] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:37.944 [2024-11-20 11:30:43.684001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:37.944 [2024-11-20 11:30:43.684021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.944 [2024-11-20 11:30:43.684061] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:37.944 [2024-11-20 11:30:43.684077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:37.944 [2024-11-20 11:30:43.684090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.944 [2024-11-20 11:30:43.684108] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:37.944 [2024-11-20 11:30:43.684120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:37.944 [2024-11-20 11:30:43.684137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.944 [2024-11-20 11:30:43.684151] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:37.944 [2024-11-20 11:30:43.684169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:37.944 [2024-11-20 11:30:43.684182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.202 11:30:43 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:16:38.202 11:30:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:38.202 11:30:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:38.202 11:30:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:38.202 11:30:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:38.202 11:30:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:38.202 11:30:43 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.202 11:30:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:38.202 11:30:43 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.202 11:30:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:16:38.202 11:30:43 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:38.461 11:30:43 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:38.461 11:30:43 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:38.461 11:30:43 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:38.461 11:30:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:38.461 11:30:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:38.461 11:30:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:38.461 11:30:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:38.461 11:30:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:16:38.461 11:30:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:38.461 11:30:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:38.461 11:30:44 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:50.701 11:30:56 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:16:50.701 11:30:56 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:16:50.701 11:30:56 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:16:50.701 11:30:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:50.701 11:30:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:50.701 11:30:56 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.701 11:30:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:50.701 11:30:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:50.701 11:30:56 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.701 11:30:56 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:50.701 11:30:56 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:50.701 11:30:56 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.18 00:16:50.701 11:30:56 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.18 00:16:50.701 11:30:56 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:16:50.701 11:30:56 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.18 00:16:50.701 11:30:56 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.18 2 00:16:50.701 remove_attach_helper took 45.18s to complete (handling 2 nvme drive(s)) 11:30:56 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:16:50.701 11:30:56 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 69284 00:16:50.701 11:30:56 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 69284 ']' 00:16:50.701 11:30:56 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 69284 00:16:50.701 11:30:56 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:16:50.701 11:30:56 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:50.701 11:30:56 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69284 00:16:50.701 11:30:56 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:50.701 11:30:56 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:50.701 11:30:56 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69284' 00:16:50.701 killing process with pid 69284 00:16:50.701 11:30:56 sw_hotplug -- common/autotest_common.sh@973 -- # kill 69284 00:16:50.701 11:30:56 sw_hotplug -- common/autotest_common.sh@978 -- # wait 69284 00:16:53.984 11:30:59 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:53.984 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:54.549 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:54.549 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:54.807 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:54.807 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:54.807 ************************************ 00:16:54.807 END TEST sw_hotplug 00:16:54.807 ************************************ 
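
[Annotation] The teardown above (killprocess 69284) checks that the PID is alive and is not the sudo wrapper before signalling and reaping it. A condensed sketch of that pattern (the helper name matches the trace; the body is a plausible reconstruction, not the suite's exact code):

    # Sketch: terminate a test app by PID and propagate its exit status.
    killprocess() {
        local pid=$1 name
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 1   # still running?
        name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0 for an SPDK app
        [[ $name != sudo ]] || return 1          # never signal the sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"   # works because the suite launched the app itself
    }
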
00:16:54.807 00:16:54.807 real 2m33.843s 00:16:54.807 user 1m52.247s 00:16:54.807 sys 0m22.026s 00:16:54.807 11:31:00 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:54.807 11:31:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:54.807 11:31:00 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:16:54.807 11:31:00 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:16:54.807 11:31:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:54.807 11:31:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:54.807 11:31:00 -- common/autotest_common.sh@10 -- # set +x 00:16:54.807 ************************************ 00:16:54.807 START TEST nvme_xnvme 00:16:54.807 ************************************ 00:16:54.807 11:31:00 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:16:55.068 * Looking for test storage... 00:16:55.068 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:55.068 11:31:00 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:55.068 11:31:00 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:16:55.068 11:31:00 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:55.068 11:31:00 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:55.068 11:31:00 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:55.068 11:31:00 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:55.068 11:31:00 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:55.068 11:31:00 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:16:55.068 11:31:00 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:16:55.068 11:31:00 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:16:55.068 11:31:00 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:16:55.068 11:31:00 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:16:55.068 11:31:00 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:16:55.068 11:31:00 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:16:55.068 11:31:00 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:55.068 11:31:00 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:16:55.068 11:31:00 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:16:55.068 11:31:00 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:55.068 11:31:00 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:55.068 11:31:00 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:16:55.068 11:31:00 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:16:55.068 11:31:00 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:55.068 11:31:00 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:16:55.068 11:31:00 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:16:55.068 11:31:00 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:16:55.068 11:31:00 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:16:55.068 11:31:00 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:55.068 11:31:00 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:16:55.068 11:31:00 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:16:55.068 11:31:00 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:55.068 11:31:00 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:55.068 11:31:00 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:16:55.068 11:31:00 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:55.068 11:31:00 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:55.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.068 --rc genhtml_branch_coverage=1 00:16:55.068 --rc genhtml_function_coverage=1 00:16:55.068 --rc genhtml_legend=1 00:16:55.068 --rc geninfo_all_blocks=1 00:16:55.068 --rc geninfo_unexecuted_blocks=1 00:16:55.068 00:16:55.068 ' 00:16:55.068 11:31:00 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:55.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.068 --rc genhtml_branch_coverage=1 00:16:55.068 --rc genhtml_function_coverage=1 00:16:55.068 --rc genhtml_legend=1 00:16:55.068 --rc geninfo_all_blocks=1 00:16:55.068 --rc geninfo_unexecuted_blocks=1 00:16:55.068 00:16:55.068 ' 00:16:55.068 11:31:00 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:55.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.068 --rc genhtml_branch_coverage=1 00:16:55.068 --rc genhtml_function_coverage=1 00:16:55.068 --rc genhtml_legend=1 00:16:55.068 --rc geninfo_all_blocks=1 00:16:55.068 --rc geninfo_unexecuted_blocks=1 00:16:55.068 00:16:55.068 ' 00:16:55.068 11:31:00 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:55.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.068 --rc genhtml_branch_coverage=1 00:16:55.068 --rc genhtml_function_coverage=1 00:16:55.068 --rc genhtml_legend=1 00:16:55.068 --rc geninfo_all_blocks=1 00:16:55.068 --rc geninfo_unexecuted_blocks=1 00:16:55.068 00:16:55.068 ' 00:16:55.068 11:31:00 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:16:55.068 11:31:00 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:16:55.068 11:31:00 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:16:55.068 11:31:00 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:16:55.068 11:31:00 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:16:55.069 11:31:00 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:16:55.069 11:31:00 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:16:55.069 11:31:00 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:16:55.069 11:31:00 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:16:55.069 11:31:00 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:16:55.069 11:31:00 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:16:55.069 11:31:00 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:16:55.069 11:31:00 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:16:55.069 11:31:00 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:16:55.069 11:31:00 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:16:55.069 11:31:00 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:16:55.069 11:31:00 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:16:55.069 11:31:00 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:16:55.069 11:31:00 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:16:55.069 11:31:00 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:16:55.069 11:31:00 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:16:55.069 11:31:00 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:16:55.069 11:31:00 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:16:55.069 11:31:00 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:16:55.069 11:31:00 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:16:55.069 11:31:00 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:16:55.069 11:31:00 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:16:55.069 11:31:00 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:16:55.069 #define SPDK_CONFIG_H 00:16:55.069 #define SPDK_CONFIG_AIO_FSDEV 1 00:16:55.069 #define SPDK_CONFIG_APPS 1 00:16:55.069 #define SPDK_CONFIG_ARCH native 00:16:55.069 #define SPDK_CONFIG_ASAN 1 00:16:55.069 #undef SPDK_CONFIG_AVAHI 00:16:55.069 #undef SPDK_CONFIG_CET 00:16:55.069 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:16:55.069 #define SPDK_CONFIG_COVERAGE 1 00:16:55.069 #define SPDK_CONFIG_CROSS_PREFIX 00:16:55.069 #undef SPDK_CONFIG_CRYPTO 00:16:55.069 #undef SPDK_CONFIG_CRYPTO_MLX5 00:16:55.069 #undef SPDK_CONFIG_CUSTOMOCF 00:16:55.069 #undef SPDK_CONFIG_DAOS 00:16:55.069 #define SPDK_CONFIG_DAOS_DIR 00:16:55.069 #define SPDK_CONFIG_DEBUG 1 00:16:55.069 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:16:55.069 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:16:55.069 #define SPDK_CONFIG_DPDK_INC_DIR 00:16:55.069 #define SPDK_CONFIG_DPDK_LIB_DIR 00:16:55.069 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:16:55.069 #undef SPDK_CONFIG_DPDK_UADK 00:16:55.069 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:16:55.069 #define SPDK_CONFIG_EXAMPLES 1 00:16:55.069 #undef SPDK_CONFIG_FC 00:16:55.069 #define SPDK_CONFIG_FC_PATH 00:16:55.069 #define SPDK_CONFIG_FIO_PLUGIN 1 00:16:55.069 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:16:55.069 #define SPDK_CONFIG_FSDEV 1 00:16:55.069 #undef SPDK_CONFIG_FUSE 00:16:55.069 #undef SPDK_CONFIG_FUZZER 00:16:55.069 #define SPDK_CONFIG_FUZZER_LIB 00:16:55.069 #undef SPDK_CONFIG_GOLANG 00:16:55.069 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:16:55.070 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:16:55.070 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:16:55.070 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:16:55.070 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:16:55.070 #undef SPDK_CONFIG_HAVE_LIBBSD 00:16:55.070 #undef SPDK_CONFIG_HAVE_LZ4 00:16:55.070 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:16:55.070 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:16:55.070 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:16:55.070 #define SPDK_CONFIG_IDXD 1 00:16:55.070 #define SPDK_CONFIG_IDXD_KERNEL 1 00:16:55.070 #undef SPDK_CONFIG_IPSEC_MB 00:16:55.070 #define SPDK_CONFIG_IPSEC_MB_DIR 00:16:55.070 #define SPDK_CONFIG_ISAL 1 00:16:55.070 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:16:55.070 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:16:55.070 #define SPDK_CONFIG_LIBDIR 00:16:55.070 #undef SPDK_CONFIG_LTO 00:16:55.070 #define SPDK_CONFIG_MAX_LCORES 128 00:16:55.070 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:16:55.070 #define SPDK_CONFIG_NVME_CUSE 1 00:16:55.070 #undef SPDK_CONFIG_OCF 00:16:55.070 #define SPDK_CONFIG_OCF_PATH 00:16:55.070 #define SPDK_CONFIG_OPENSSL_PATH 00:16:55.070 #undef SPDK_CONFIG_PGO_CAPTURE 00:16:55.070 #define SPDK_CONFIG_PGO_DIR 00:16:55.070 #undef SPDK_CONFIG_PGO_USE 00:16:55.070 #define SPDK_CONFIG_PREFIX /usr/local 00:16:55.070 #undef SPDK_CONFIG_RAID5F 00:16:55.070 #undef SPDK_CONFIG_RBD 00:16:55.070 #define SPDK_CONFIG_RDMA 1 00:16:55.070 #define SPDK_CONFIG_RDMA_PROV verbs 00:16:55.070 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:16:55.070 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:16:55.070 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:16:55.070 #define SPDK_CONFIG_SHARED 1 00:16:55.070 #undef SPDK_CONFIG_SMA 00:16:55.070 #define SPDK_CONFIG_TESTS 1 00:16:55.070 #undef SPDK_CONFIG_TSAN 00:16:55.070 #define SPDK_CONFIG_UBLK 1 00:16:55.070 #define SPDK_CONFIG_UBSAN 1 00:16:55.070 #undef SPDK_CONFIG_UNIT_TESTS 00:16:55.070 #undef SPDK_CONFIG_URING 00:16:55.070 #define SPDK_CONFIG_URING_PATH 00:16:55.070 #undef SPDK_CONFIG_URING_ZNS 00:16:55.070 #undef SPDK_CONFIG_USDT 00:16:55.070 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:16:55.070 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:16:55.070 #undef SPDK_CONFIG_VFIO_USER 00:16:55.070 #define SPDK_CONFIG_VFIO_USER_DIR 00:16:55.070 #define SPDK_CONFIG_VHOST 1 00:16:55.070 #define SPDK_CONFIG_VIRTIO 1 00:16:55.070 #undef SPDK_CONFIG_VTUNE 00:16:55.070 #define SPDK_CONFIG_VTUNE_DIR 00:16:55.070 #define SPDK_CONFIG_WERROR 1 00:16:55.070 #define SPDK_CONFIG_WPDK_DIR 00:16:55.070 #define SPDK_CONFIG_XNVME 1 00:16:55.070 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:16:55.070 11:31:00 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:16:55.070 11:31:00 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:55.070 11:31:00 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:16:55.070 11:31:00 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:55.070 11:31:00 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:55.070 11:31:00 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:55.070 11:31:00 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.070 11:31:00 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.070 11:31:00 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.070 11:31:00 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:16:55.070 11:31:00 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.070 11:31:00 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:16:55.070 11:31:00 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:16:55.070 11:31:00 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:16:55.070 11:31:00 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:16:55.070 11:31:00 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:16:55.070 11:31:00 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:16:55.070 11:31:00 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:16:55.070 11:31:00 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:16:55.070 11:31:00 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:16:55.070 11:31:00 nvme_xnvme -- pm/common@68 -- # uname -s 00:16:55.070 11:31:00 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:16:55.070 11:31:00 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:16:55.070 
11:31:00 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:16:55.070 11:31:00 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:16:55.070 11:31:00 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:16:55.070 11:31:00 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:16:55.070 11:31:00 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:16:55.070 11:31:00 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:16:55.070 11:31:00 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:16:55.070 11:31:00 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:16:55.070 11:31:00 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:16:55.070 11:31:00 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:16:55.070 11:31:00 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:16:55.070 11:31:00 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:16:55.070 11:31:00 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:16:55.070 11:31:00 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:16:55.070 11:31:00 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:16:55.070 11:31:00 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:16:55.070 11:31:00 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:16:55.070 11:31:00 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:16:55.070 11:31:00 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:16:55.070 11:31:00 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:16:55.070 11:31:00 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:16:55.070 11:31:00 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:16:55.070 11:31:00 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:16:55.070 11:31:00 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:16:55.070 11:31:00 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:16:55.070 11:31:00 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:16:55.070 11:31:00 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:16:55.070 11:31:00 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:16:55.070 11:31:00 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:16:55.070 11:31:00 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:16:55.070 11:31:00 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:16:55.070 11:31:00 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:16:55.070 11:31:00 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:16:55.070 11:31:00 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:16:55.070 11:31:00 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:16:55.070 11:31:00 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:16:55.070 11:31:00 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:16:55.070 11:31:00 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:16:55.070 11:31:00 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:16:55.070 11:31:00 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:16:55.070 11:31:00 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:16:55.070 11:31:00 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:16:55.070 11:31:00 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:16:55.070 11:31:00 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:16:55.070 11:31:00 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:16:55.070 11:31:00 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:16:55.070 11:31:00 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:16:55.070 11:31:00 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:16:55.070 11:31:00 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:16:55.071 11:31:00 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:16:55.071 11:31:00 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:16:55.071 11:31:00 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
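The sanitizer setup traced above boils down to four exports plus a suppression file. A stand-alone sketch, assuming the same suppression path the harness uses (/var/tmp/asan_suppression_file); the option strings are copied verbatim from the trace:

  # ASan: abort on first error, keep coredumps, tolerate new/delete type mismatches
  export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
  # UBSan: halt and abort on first error, exit with 134 (SIGABRT-style)
  export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
  # LSan: rebuild the suppression file and hide the known libfuse3 leak
  rm -rf /var/tmp/asan_suppression_file
  echo 'leak:libfuse3.so' >> /var/tmp/asan_suppression_file
  export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file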
00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 70633 ]] 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 70633 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.bVwZMD 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.bVwZMD/tests/xnvme /tmp/spdk.bVwZMD 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:16:55.072 11:31:00 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13976182784 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5592162304 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261661696 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266425344 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493775872 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13976182784 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5592162304 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:55.072 11:31:00 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6266277888 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266425344 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253269504 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253281792 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=90692829184 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=9009950720 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:16:55.072 * Looking for test storage... 
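The df walk traced here (common/autotest_common.sh@373-@376) is set_test_storage populating per-mount arrays that the candidate check starting at @381 below consumes. A minimal sketch of that loop, assuming df -T's 1K-block columns are canonicalized to bytes, which matches the byte-sized values in the trace:

  declare -A mounts fss sizes avails uses
  while read -r source fs size use avail _ mount; do
      mounts["$mount"]=$source
      fss["$mount"]=$fs
      sizes["$mount"]=$((size * 1024))    # df -T reports 1K blocks
      uses["$mount"]=$((use * 1024))
      avails["$mount"]=$((avail * 1024))
  done < <(df -T | grep -v Filesystem)
  # Per storage candidate, its mount's avails[] must cover requested_size:
  # here the 2 GiB ask plus a 64 MiB margin, i.e. 2214592512 bytes.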
00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:55.072 11:31:00 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:16:55.331 11:31:00 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:16:55.331 11:31:00 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13976182784 00:16:55.331 11:31:00 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:16:55.331 11:31:00 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:16:55.331 11:31:00 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:16:55.331 11:31:00 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:16:55.331 11:31:00 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:16:55.331 11:31:00 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:55.331 11:31:00 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:55.331 11:31:00 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:55.331 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:55.331 11:31:00 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:16:55.331 11:31:00 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace 00:16:55.331 11:31:00 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:16:55.331 11:31:00 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:16:55.331 11:31:00 nvme_xnvme -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:16:55.331 11:31:00 nvme_xnvme -- common/autotest_common.sh@1685 -- # true 00:16:55.331 11:31:00 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd 00:16:55.331 11:31:00 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:16:55.331 11:31:00 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:16:55.331 11:31:00 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:16:55.331 11:31:00 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:16:55.332 11:31:00 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:16:55.332 11:31:00 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:16:55.332 11:31:00 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:16:55.332 11:31:00 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:16:55.332 11:31:00 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:55.332 11:31:00 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:16:55.332 11:31:00 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:55.332 11:31:00 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:55.332 11:31:00 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:55.332 11:31:00 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:55.332 11:31:00 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:55.332 11:31:00 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:16:55.332 11:31:00 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:16:55.332 11:31:00 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:16:55.332 11:31:00 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:16:55.332 11:31:00 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:16:55.332 11:31:00 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:16:55.332 11:31:00 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:16:55.332 11:31:00 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:55.332 11:31:00 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:16:55.332 11:31:00 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:16:55.332 11:31:00 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:55.332 11:31:00 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:55.332 11:31:00 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:16:55.332 11:31:00 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:16:55.332 11:31:00 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:55.332 11:31:00 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:16:55.332 11:31:00 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:16:55.332 11:31:00 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:16:55.332 11:31:00 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:16:55.332 11:31:00 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:55.332 11:31:00 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:16:55.332 11:31:00 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:16:55.332 11:31:00 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:55.332 11:31:00 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:55.332 11:31:00 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:16:55.332 11:31:00 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:55.332 11:31:00 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:55.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.332 --rc genhtml_branch_coverage=1 00:16:55.332 --rc genhtml_function_coverage=1 00:16:55.332 --rc genhtml_legend=1 00:16:55.332 --rc geninfo_all_blocks=1 00:16:55.332 --rc geninfo_unexecuted_blocks=1 00:16:55.332 00:16:55.332 ' 00:16:55.332 11:31:00 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:55.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.332 --rc genhtml_branch_coverage=1 00:16:55.332 --rc genhtml_function_coverage=1 00:16:55.332 --rc genhtml_legend=1 00:16:55.332 --rc geninfo_all_blocks=1 
00:16:55.332 --rc geninfo_unexecuted_blocks=1 00:16:55.332 00:16:55.332 ' 00:16:55.332 11:31:00 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:55.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.332 --rc genhtml_branch_coverage=1 00:16:55.332 --rc genhtml_function_coverage=1 00:16:55.332 --rc genhtml_legend=1 00:16:55.332 --rc geninfo_all_blocks=1 00:16:55.332 --rc geninfo_unexecuted_blocks=1 00:16:55.332 00:16:55.332 ' 00:16:55.332 11:31:00 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:55.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.332 --rc genhtml_branch_coverage=1 00:16:55.332 --rc genhtml_function_coverage=1 00:16:55.332 --rc genhtml_legend=1 00:16:55.332 --rc geninfo_all_blocks=1 00:16:55.332 --rc geninfo_unexecuted_blocks=1 00:16:55.332 00:16:55.332 ' 00:16:55.332 11:31:00 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:55.332 11:31:00 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:16:55.332 11:31:00 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:55.332 11:31:00 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:55.332 11:31:00 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:55.332 11:31:00 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.332 11:31:00 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.332 11:31:00 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.332 11:31:00 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:16:55.332 11:31:00 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.332 11:31:00 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:16:55.332 11:31:00 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:16:55.332 11:31:00 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:16:55.332 11:31:00 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:16:55.332 11:31:00 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:16:55.332 11:31:00 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:16:55.332 11:31:00 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:16:55.332 11:31:00 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:16:55.332 11:31:00 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:16:55.332 11:31:00 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:16:55.332 11:31:00 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:16:55.332 11:31:00 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:16:55.332 11:31:00 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:16:55.332 11:31:00 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:16:55.332 11:31:00 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:16:55.332 11:31:00 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:16:55.332 11:31:00 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:16:55.332 11:31:00 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:16:55.332 11:31:00 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:16:55.332 11:31:00 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:16:55.332 11:31:00 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:16:55.332 11:31:00 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:55.590 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:55.848 Waiting for block devices as requested 00:16:55.848 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:56.106 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:56.106 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:16:56.364 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:17:01.629 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:17:01.629 11:31:07 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:17:01.886 11:31:07 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:17:01.886 11:31:07 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:17:02.151 11:31:07 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:17:02.151 11:31:07 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:17:02.151 11:31:07 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:17:02.151 11:31:07 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:17:02.151 11:31:07 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:17:02.151 No valid GPT data, bailing 00:17:02.151 11:31:07 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:02.151 11:31:07 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:17:02.151 11:31:07 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:17:02.151 11:31:07 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:17:02.151 11:31:07 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:17:02.151 11:31:07 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:17:02.151 11:31:07 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:17:02.151 11:31:07 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:17:02.151 11:31:07 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:17:02.151 11:31:07 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:17:02.151 11:31:07 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:17:02.151 11:31:07 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:17:02.151 11:31:07 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:17:02.151 11:31:07 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:17:02.151 11:31:07 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:17:02.151 11:31:07 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:17:02.151 11:31:07 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:17:02.151 11:31:07 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:02.151 11:31:07 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:02.151 11:31:07 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:02.151 ************************************ 00:17:02.151 START TEST xnvme_rpc 00:17:02.151 ************************************ 00:17:02.151 11:31:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:17:02.151 11:31:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:17:02.151 11:31:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:17:02.151 11:31:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:17:02.151 11:31:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:17:02.151 11:31:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71027 00:17:02.151 11:31:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71027 00:17:02.151 11:31:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:02.151 11:31:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71027 ']' 00:17:02.151 11:31:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.151 11:31:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:02.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:02.151 11:31:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.151 11:31:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:02.151 11:31:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:02.417 [2024-11-20 11:31:07.927130] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
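Before spdk_tgt comes up here, prep_nvme verified that /dev/nvme0n1 was safe to claim: spdk-gpt.py printed "No valid GPT data, bailing" and blkid returned an empty PTTYPE, so block_in_use returned 1 (free) and the xnvme filenames were assigned. A loose reconstruction of that check from the trace; the exact control flow in scripts/common.sh may differ:

  block_in_use() {
      local block=$1 pt
      # A parseable GPT means SPDK or a filesystem may already own the device
      if ! /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py "$block"; then
          pt=$(blkid -s PTTYPE -o value "$block")
          [[ -z $pt ]] && return 1    # no partition table at all: device is free
      fi
      return 0                        # anything else is treated as in use
  }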
00:17:02.417 [2024-11-20 11:31:07.927386] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71027 ] 00:17:02.417 [2024-11-20 11:31:08.126896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.676 [2024-11-20 11:31:08.275348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.051 11:31:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:04.051 11:31:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:04.051 11:31:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:17:04.051 11:31:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.051 11:31:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.051 xnvme_bdev 00:17:04.051 11:31:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.051 11:31:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:17:04.051 11:31:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:04.051 11:31:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.051 11:31:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.051 11:31:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:17:04.051 11:31:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.051 11:31:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:17:04.051 11:31:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:17:04.051 11:31:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:17:04.051 11:31:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:04.051 11:31:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.051 11:31:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.051 11:31:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.051 11:31:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:17:04.051 11:31:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:17:04.051 11:31:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:04.051 11:31:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:17:04.051 11:31:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.051 11:31:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.051 11:31:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.051 11:31:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:17:04.051 11:31:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:17:04.051 11:31:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:17:04.051 11:31:09 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:04.051 11:31:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.051 11:31:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.051 11:31:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.051 11:31:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:17:04.051 11:31:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:17:04.051 11:31:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.051 11:31:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.051 11:31:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.051 11:31:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71027 00:17:04.051 11:31:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71027 ']' 00:17:04.051 11:31:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71027 00:17:04.051 11:31:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:17:04.051 11:31:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:04.051 11:31:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71027 00:17:04.051 killing process with pid 71027 00:17:04.051 11:31:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:04.051 11:31:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:04.051 11:31:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71027' 00:17:04.051 11:31:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71027 00:17:04.051 11:31:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71027 00:17:07.337 00:17:07.337 real 0m4.708s 00:17:07.337 user 0m4.920s 00:17:07.337 sys 0m0.661s 00:17:07.337 ************************************ 00:17:07.337 END TEST xnvme_rpc 00:17:07.337 ************************************ 00:17:07.337 11:31:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:07.337 11:31:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.337 11:31:12 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:17:07.337 11:31:12 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:07.337 11:31:12 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:07.337 11:31:12 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:07.337 ************************************ 00:17:07.337 START TEST xnvme_bdevperf 00:17:07.337 ************************************ 00:17:07.337 11:31:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:17:07.337 11:31:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:17:07.337 11:31:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:17:07.337 11:31:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:07.337 11:31:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:17:07.337 11:31:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:17:07.337 11:31:12 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:07.337 11:31:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:07.337 { 00:17:07.337 "subsystems": [ 00:17:07.337 { 00:17:07.337 "subsystem": "bdev", 00:17:07.337 "config": [ 00:17:07.337 { 00:17:07.337 "params": { 00:17:07.337 "io_mechanism": "libaio", 00:17:07.337 "conserve_cpu": false, 00:17:07.337 "filename": "/dev/nvme0n1", 00:17:07.337 "name": "xnvme_bdev" 00:17:07.337 }, 00:17:07.337 "method": "bdev_xnvme_create" 00:17:07.337 }, 00:17:07.337 { 00:17:07.337 "method": "bdev_wait_for_examine" 00:17:07.337 } 00:17:07.337 ] 00:17:07.337 } 00:17:07.337 ] 00:17:07.337 } 00:17:07.337 [2024-11-20 11:31:12.641435] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:17:07.337 [2024-11-20 11:31:12.641635] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71118 ] 00:17:07.337 [2024-11-20 11:31:12.844337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.337 [2024-11-20 11:31:13.021024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:07.903 Running I/O for 5 seconds... 00:17:09.774 26500.00 IOPS, 103.52 MiB/s [2024-11-20T11:31:16.913Z] 25554.50 IOPS, 99.82 MiB/s [2024-11-20T11:31:17.889Z] 26239.00 IOPS, 102.50 MiB/s [2024-11-20T11:31:18.824Z] 26671.00 IOPS, 104.18 MiB/s 00:17:13.062 Latency(us) 00:17:13.062 [2024-11-20T11:31:18.824Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:13.062 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:17:13.062 xnvme_bdev : 5.00 26595.76 103.89 0.00 0.00 2401.02 116.05 39196.77 00:17:13.062 [2024-11-20T11:31:18.824Z] =================================================================================================================== 00:17:13.062 [2024-11-20T11:31:18.824Z] Total : 26595.76 103.89 0.00 0.00 2401.02 116.05 39196.77 00:17:14.435 11:31:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:14.435 11:31:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:17:14.435 11:31:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:14.435 11:31:19 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:14.435 11:31:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:14.435 { 00:17:14.435 "subsystems": [ 00:17:14.435 { 00:17:14.435 "subsystem": "bdev", 00:17:14.435 "config": [ 00:17:14.435 { 00:17:14.435 "params": { 00:17:14.435 "io_mechanism": "libaio", 00:17:14.435 "conserve_cpu": false, 00:17:14.435 "filename": "/dev/nvme0n1", 00:17:14.435 "name": "xnvme_bdev" 00:17:14.435 }, 00:17:14.435 "method": "bdev_xnvme_create" 00:17:14.435 }, 00:17:14.435 { 00:17:14.435 "method": "bdev_wait_for_examine" 00:17:14.435 } 00:17:14.435 ] 00:17:14.435 } 00:17:14.435 ] 00:17:14.435 } 00:17:14.435 [2024-11-20 11:31:20.089602] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
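Each bdevperf pass here is the same invocation with -w flipped between randread and randwrite; the JSON printed by gen_conf above is what arrives on /dev/fd/62. A stand-alone equivalent of the randread pass, with a hypothetical /tmp/xnvme.json standing in for the harness's fd plumbing:

  echo '{"subsystems":[{"subsystem":"bdev","config":[{"method":"bdev_xnvme_create","params":{"io_mechanism":"libaio","conserve_cpu":false,"filename":"/dev/nvme0n1","name":"xnvme_bdev"}},{"method":"bdev_wait_for_examine"}]}]}' > /tmp/xnvme.json
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/xnvme.json \
      -q 64 -w randread -t 5 -T xnvme_bdev -o 4096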
00:17:14.435 [2024-11-20 11:31:20.090136] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71204 ] 00:17:14.693 [2024-11-20 11:31:20.310066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.041 [2024-11-20 11:31:20.490448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:15.299 Running I/O for 5 seconds... 00:17:17.606 23436.00 IOPS, 91.55 MiB/s [2024-11-20T11:31:24.301Z] 24405.50 IOPS, 95.33 MiB/s [2024-11-20T11:31:25.235Z] 25753.00 IOPS, 100.60 MiB/s [2024-11-20T11:31:26.168Z] 25950.00 IOPS, 101.37 MiB/s [2024-11-20T11:31:26.168Z] 26554.80 IOPS, 103.73 MiB/s 00:17:20.406 Latency(us) 00:17:20.406 [2024-11-20T11:31:26.168Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:20.406 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:17:20.406 xnvme_bdev : 5.01 26532.12 103.64 0.00 0.00 2406.77 475.92 6553.60 00:17:20.406 [2024-11-20T11:31:26.168Z] =================================================================================================================== 00:17:20.406 [2024-11-20T11:31:26.168Z] Total : 26532.12 103.64 0.00 0.00 2406.77 475.92 6553.60 00:17:21.778 ************************************ 00:17:21.778 END TEST xnvme_bdevperf 00:17:21.778 ************************************ 00:17:21.778 00:17:21.778 real 0m15.018s 00:17:21.778 user 0m6.374s 00:17:21.778 sys 0m5.991s 00:17:21.778 11:31:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:21.778 11:31:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:22.036 11:31:27 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:17:22.036 11:31:27 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:22.036 11:31:27 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:22.036 11:31:27 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:22.036 ************************************ 00:17:22.036 START TEST xnvme_fio_plugin 00:17:22.036 ************************************ 00:17:22.036 11:31:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:17:22.036 11:31:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:17:22.036 11:31:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:17:22.036 11:31:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:22.036 11:31:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:22.036 11:31:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:22.036 11:31:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:22.036 11:31:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:22.036 11:31:27 
nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:22.036 11:31:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:22.036 11:31:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:22.036 11:31:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:22.036 11:31:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:22.036 11:31:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:22.036 11:31:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:22.036 11:31:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:22.036 11:31:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:22.036 11:31:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:22.036 11:31:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:22.036 11:31:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:22.036 11:31:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:22.036 11:31:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:22.036 11:31:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:22.036 11:31:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:22.036 { 00:17:22.036 "subsystems": [ 00:17:22.036 { 00:17:22.036 "subsystem": "bdev", 00:17:22.036 "config": [ 00:17:22.036 { 00:17:22.036 "params": { 00:17:22.036 "io_mechanism": "libaio", 00:17:22.036 "conserve_cpu": false, 00:17:22.036 "filename": "/dev/nvme0n1", 00:17:22.036 "name": "xnvme_bdev" 00:17:22.036 }, 00:17:22.036 "method": "bdev_xnvme_create" 00:17:22.036 }, 00:17:22.036 { 00:17:22.036 "method": "bdev_wait_for_examine" 00:17:22.036 } 00:17:22.036 ] 00:17:22.036 } 00:17:22.036 ] 00:17:22.036 } 00:17:22.293 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:22.293 fio-3.35 00:17:22.293 Starting 1 thread 00:17:28.867 00:17:28.867 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71335: Wed Nov 20 11:31:33 2024 00:17:28.867 read: IOPS=31.7k, BW=124MiB/s (130MB/s)(619MiB/5001msec) 00:17:28.867 slat (usec): min=5, max=3535, avg=26.75, stdev=36.76 00:17:28.867 clat (usec): min=8, max=24656, avg=1232.00, stdev=846.82 00:17:28.867 lat (usec): min=51, max=24662, avg=1258.75, stdev=848.94 00:17:28.867 clat percentiles (usec): 00:17:28.867 | 1.00th=[ 204], 5.00th=[ 330], 10.00th=[ 429], 20.00th=[ 603], 00:17:28.867 | 30.00th=[ 758], 40.00th=[ 906], 50.00th=[ 1057], 60.00th=[ 1221], 00:17:28.867 | 70.00th=[ 1418], 80.00th=[ 1729], 90.00th=[ 2245], 95.00th=[ 2737], 00:17:28.867 | 99.00th=[ 3982], 99.50th=[ 4490], 99.90th=[ 5932], 99.95th=[ 8029], 00:17:28.867 | 99.99th=[18744] 00:17:28.867 bw ( KiB/s): min=110296, max=143472, 
per=100.00%, avg=128783.67, stdev=12437.19, samples=9 00:17:28.867 iops : min=27574, max=35868, avg=32195.89, stdev=3109.28, samples=9 00:17:28.867 lat (usec) : 10=0.01%, 20=0.01%, 50=0.01%, 100=0.07%, 250=2.04% 00:17:28.867 lat (usec) : 500=11.83%, 750=15.59%, 1000=17.19% 00:17:28.867 lat (msec) : 2=39.45%, 4=12.86%, 10=0.91%, 20=0.03%, 50=0.01% 00:17:28.867 cpu : usr=26.58%, sys=50.98%, ctx=77, majf=0, minf=764 00:17:28.867 IO depths : 1=0.1%, 2=1.0%, 4=3.8%, 8=9.8%, 16=24.0%, 32=59.3%, >=64=2.1% 00:17:28.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:28.867 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.6%, >=64=0.0% 00:17:28.867 issued rwts: total=158437,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:28.867 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:28.867 00:17:28.867 Run status group 0 (all jobs): 00:17:28.867 READ: bw=124MiB/s (130MB/s), 124MiB/s-124MiB/s (130MB/s-130MB/s), io=619MiB (649MB), run=5001-5001msec 00:17:29.801 ----------------------------------------------------- 00:17:29.801 Suppressions used: 00:17:29.801 count bytes template 00:17:29.801 1 11 /usr/src/fio/parse.c 00:17:29.801 1 8 libtcmalloc_minimal.so 00:17:29.801 1 904 libcrypto.so 00:17:29.801 ----------------------------------------------------- 00:17:29.801 00:17:30.059 11:31:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:30.059 11:31:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:30.059 11:31:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:30.059 11:31:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:30.059 11:31:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:30.059 11:31:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:30.059 11:31:35 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:30.059 11:31:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:30.059 11:31:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:30.059 11:31:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:30.059 11:31:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:30.059 11:31:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:30.059 11:31:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:30.059 11:31:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:30.059 11:31:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:30.059 11:31:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:30.059 11:31:35 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:30.059 11:31:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:30.059 11:31:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:30.059 11:31:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:30.059 11:31:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:30.059 { 00:17:30.059 "subsystems": [ 00:17:30.059 { 00:17:30.059 "subsystem": "bdev", 00:17:30.059 "config": [ 00:17:30.059 { 00:17:30.059 "params": { 00:17:30.059 "io_mechanism": "libaio", 00:17:30.059 "conserve_cpu": false, 00:17:30.059 "filename": "/dev/nvme0n1", 00:17:30.059 "name": "xnvme_bdev" 00:17:30.059 }, 00:17:30.059 "method": "bdev_xnvme_create" 00:17:30.059 }, 00:17:30.059 { 00:17:30.059 "method": "bdev_wait_for_examine" 00:17:30.059 } 00:17:30.059 ] 00:17:30.059 } 00:17:30.059 ] 00:17:30.059 } 00:17:30.317 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:30.317 fio-3.35 00:17:30.317 Starting 1 thread 00:17:36.984 00:17:36.984 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71438: Wed Nov 20 11:31:41 2024 00:17:36.984 write: IOPS=29.8k, BW=116MiB/s (122MB/s)(583MiB/5001msec); 0 zone resets 00:17:36.984 slat (usec): min=5, max=5020, avg=29.12, stdev=42.90 00:17:36.984 clat (usec): min=94, max=8234, avg=1262.10, stdev=708.67 00:17:36.984 lat (usec): min=180, max=8277, avg=1291.22, stdev=711.27 00:17:36.984 clat percentiles (usec): 00:17:36.984 | 1.00th=[ 253], 5.00th=[ 375], 10.00th=[ 486], 20.00th=[ 676], 00:17:36.984 | 30.00th=[ 832], 40.00th=[ 988], 50.00th=[ 1139], 60.00th=[ 1319], 00:17:36.984 | 70.00th=[ 1516], 80.00th=[ 1762], 90.00th=[ 2147], 95.00th=[ 2507], 00:17:36.984 | 99.00th=[ 3654], 99.50th=[ 4113], 99.90th=[ 5080], 99.95th=[ 5538], 00:17:36.984 | 99.99th=[ 7701] 00:17:36.984 bw ( KiB/s): min=106824, max=136160, per=100.00%, avg=120580.33, stdev=10587.12, samples=9 00:17:36.984 iops : min=26706, max=34040, avg=30145.00, stdev=2646.83, samples=9 00:17:36.984 lat (usec) : 100=0.01%, 250=0.93%, 500=9.78%, 750=14.21%, 1000=16.18% 00:17:36.984 lat (msec) : 2=46.01%, 4=12.29%, 10=0.60% 00:17:36.984 cpu : usr=25.28%, sys=54.20%, ctx=117, majf=0, minf=764 00:17:36.984 IO depths : 1=0.1%, 2=1.2%, 4=4.2%, 8=10.7%, 16=24.9%, 32=57.1%, >=64=1.9% 00:17:36.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:36.984 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:17:36.984 issued rwts: total=0,149140,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:36.984 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:36.984 00:17:36.984 Run status group 0 (all jobs): 00:17:36.984 WRITE: bw=116MiB/s (122MB/s), 116MiB/s-116MiB/s (122MB/s-122MB/s), io=583MiB (611MB), run=5001-5001msec 00:17:37.920 ----------------------------------------------------- 00:17:37.920 Suppressions used: 00:17:37.920 count bytes template 00:17:37.920 1 11 /usr/src/fio/parse.c 00:17:37.920 1 8 libtcmalloc_minimal.so 00:17:37.920 1 904 libcrypto.so 00:17:37.920 ----------------------------------------------------- 00:17:37.920 00:17:37.920 
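For reference, the fio_plugin run traced above reduces to the following standalone invocation. This is a minimal sketch assuming the test VM layout shown in this log (/home/vagrant/spdk_repo, /usr/src/fio, a local /dev/nvme0n1), with the JSON that gen_conf streams over /dev/fd/62 written to a file instead:

  # Sketch: reproduce the libaio randwrite fio_plugin run by hand.
  cat > /tmp/xnvme_bdev.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "params": {
              "io_mechanism": "libaio",
              "conserve_cpu": false,
              "filename": "/dev/nvme0n1",
              "name": "xnvme_bdev"
            },
            "method": "bdev_xnvme_create"
          },
          { "method": "bdev_wait_for_examine" }
        ]
      }
    ]
  }
  EOF
  # On ASAN builds the sanitizer runtime must be preloaded ahead of the
  # external ioengine, exactly as the LD_PRELOAD line in the trace shows.
  LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme_bdev.json \
      --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
      --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev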
00:17:37.920 real 0m15.960s 00:17:37.920 user 0m7.373s 00:17:37.920 sys 0m6.071s 00:17:37.920 11:31:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:37.920 11:31:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:37.920 ************************************ 00:17:37.920 END TEST xnvme_fio_plugin 00:17:37.920 ************************************ 00:17:37.920 11:31:43 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:17:37.920 11:31:43 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:17:37.920 11:31:43 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:17:37.920 11:31:43 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:17:37.920 11:31:43 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:37.920 11:31:43 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:37.920 11:31:43 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:37.920 ************************************ 00:17:37.920 START TEST xnvme_rpc 00:17:37.920 ************************************ 00:17:37.920 11:31:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:17:37.920 11:31:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:17:37.920 11:31:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:17:37.920 11:31:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:17:37.920 11:31:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:17:37.920 11:31:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71524 00:17:37.920 11:31:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:37.920 11:31:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71524 00:17:37.920 11:31:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71524 ']' 00:17:37.920 11:31:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.920 11:31:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:37.920 11:31:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:37.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:37.920 11:31:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:37.920 11:31:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:38.179 [2024-11-20 11:31:43.739273] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
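Stripped of the harness, the xnvme_rpc test starting here is a create/inspect/delete round-trip against spdk_tgt. A sketch, under the assumption that rpc_cmd forwards its arguments to scripts/rpc.py on /var/tmp/spdk.sock; the socket-wait loop is a crude stand-in for the waitforlisten helper:

  # Sketch of the xnvme_rpc round-trip (libaio with conserve_cpu, i.e. -c).
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
  tgt_pid=$!
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # stand-in for waitforlisten
  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }
  rpc bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c   # -c == cc["true"], conserve_cpu on
  rpc framework_get_config bdev |
    jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # expect: true
  rpc bdev_xnvme_delete xnvme_bdev
  kill "$tgt_pid" && wait "$tgt_pid"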
00:17:38.179 [2024-11-20 11:31:43.739704] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71524 ] 00:17:38.179 [2024-11-20 11:31:43.931164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.437 [2024-11-20 11:31:44.074512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:39.396 11:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:39.396 11:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:39.396 11:31:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:17:39.396 11:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.396 11:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:39.396 xnvme_bdev 00:17:39.396 11:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.396 11:31:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:17:39.396 11:31:45 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:39.396 11:31:45 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:17:39.396 11:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.396 11:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:39.396 11:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.654 11:31:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:17:39.654 11:31:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:17:39.654 11:31:45 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:39.654 11:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.654 11:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:39.654 11:31:45 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:17:39.654 11:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.654 11:31:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:17:39.654 11:31:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:17:39.654 11:31:45 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:39.654 11:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.654 11:31:45 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:17:39.654 11:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:39.654 11:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.654 11:31:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:17:39.654 11:31:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:17:39.654 11:31:45 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:39.654 11:31:45 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.654 11:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:39.654 11:31:45 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:17:39.654 11:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.654 11:31:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:17:39.654 11:31:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:17:39.654 11:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.654 11:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:39.654 11:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.654 11:31:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71524 00:17:39.654 11:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71524 ']' 00:17:39.654 11:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71524 00:17:39.654 11:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:17:39.654 11:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:39.654 11:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71524 00:17:39.654 killing process with pid 71524 00:17:39.654 11:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:39.654 11:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:39.654 11:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71524' 00:17:39.654 11:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71524 00:17:39.654 11:31:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71524 00:17:42.938 00:17:42.938 real 0m4.633s 00:17:42.938 user 0m4.818s 00:17:42.938 sys 0m0.592s 00:17:42.938 11:31:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:42.938 ************************************ 00:17:42.938 11:31:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:42.938 END TEST xnvme_rpc 00:17:42.938 ************************************ 00:17:42.938 11:31:48 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:17:42.938 11:31:48 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:42.938 11:31:48 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:42.938 11:31:48 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:42.938 ************************************ 00:17:42.938 START TEST xnvme_bdevperf 00:17:42.938 ************************************ 00:17:42.938 11:31:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:17:42.938 11:31:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:17:42.938 11:31:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:17:42.938 11:31:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:42.938 11:31:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:17:42.938 11:31:48 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:17:42.938 11:31:48 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:42.938 11:31:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:42.938 { 00:17:42.938 "subsystems": [ 00:17:42.938 { 00:17:42.938 "subsystem": "bdev", 00:17:42.938 "config": [ 00:17:42.938 { 00:17:42.938 "params": { 00:17:42.938 "io_mechanism": "libaio", 00:17:42.938 "conserve_cpu": true, 00:17:42.938 "filename": "/dev/nvme0n1", 00:17:42.938 "name": "xnvme_bdev" 00:17:42.938 }, 00:17:42.938 "method": "bdev_xnvme_create" 00:17:42.938 }, 00:17:42.938 { 00:17:42.938 "method": "bdev_wait_for_examine" 00:17:42.938 } 00:17:42.938 ] 00:17:42.938 } 00:17:42.938 ] 00:17:42.938 } 00:17:42.938 [2024-11-20 11:31:48.385843] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:17:42.938 [2024-11-20 11:31:48.386023] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71617 ] 00:17:42.938 [2024-11-20 11:31:48.569632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.196 [2024-11-20 11:31:48.708053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.454 Running I/O for 5 seconds... 00:17:45.763 30501.00 IOPS, 119.14 MiB/s [2024-11-20T11:31:52.460Z] 28845.00 IOPS, 112.68 MiB/s [2024-11-20T11:31:53.395Z] 27441.33 IOPS, 107.19 MiB/s [2024-11-20T11:31:54.328Z] 26709.25 IOPS, 104.33 MiB/s [2024-11-20T11:31:54.328Z] 27109.40 IOPS, 105.90 MiB/s 00:17:48.566 Latency(us) 00:17:48.566 [2024-11-20T11:31:54.328Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:48.566 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:17:48.566 xnvme_bdev : 5.01 27087.78 105.81 0.00 0.00 2357.60 483.72 6803.26 00:17:48.566 [2024-11-20T11:31:54.328Z] =================================================================================================================== 00:17:48.566 [2024-11-20T11:31:54.328Z] Total : 27087.78 105.81 0.00 0.00 2357.60 483.72 6803.26 00:17:49.942 11:31:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:49.942 11:31:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:17:49.942 11:31:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:49.942 11:31:55 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:49.942 11:31:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:49.942 { 00:17:49.942 "subsystems": [ 00:17:49.942 { 00:17:49.942 "subsystem": "bdev", 00:17:49.942 "config": [ 00:17:49.942 { 00:17:49.942 "params": { 00:17:49.942 "io_mechanism": "libaio", 00:17:49.942 "conserve_cpu": true, 00:17:49.942 "filename": "/dev/nvme0n1", 00:17:49.942 "name": "xnvme_bdev" 00:17:49.942 }, 00:17:49.942 "method": "bdev_xnvme_create" 00:17:49.942 }, 00:17:49.942 { 00:17:49.942 "method": "bdev_wait_for_examine" 00:17:49.942 } 00:17:49.942 ] 00:17:49.942 } 00:17:49.942 ] 00:17:49.942 } 00:17:49.942 [2024-11-20 11:31:55.614388] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
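Every bdevperf run in this test has the same shape. The randwrite pass about to start, as a standalone sketch, with the gen_conf JSON supplied over a process-substitution fd (the /dev/fd/62 visible in the command line above):

  # Sketch: bdevperf against the xnvme bdev, config passed as an fd.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json <(printf '%s' '{"subsystems":[{"subsystem":"bdev","config":[
      {"params":{"io_mechanism":"libaio","conserve_cpu":true,
                 "filename":"/dev/nvme0n1","name":"xnvme_bdev"},
       "method":"bdev_xnvme_create"},
      {"method":"bdev_wait_for_examine"}]}]}') \
    -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096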
00:17:49.942 [2024-11-20 11:31:55.614735] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71702 ] 00:17:50.201 [2024-11-20 11:31:55.791800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.201 [2024-11-20 11:31:55.932379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.767 Running I/O for 5 seconds... 00:17:52.637 26740.00 IOPS, 104.45 MiB/s [2024-11-20T11:31:59.773Z] 31027.50 IOPS, 121.20 MiB/s [2024-11-20T11:32:00.707Z] 30082.67 IOPS, 117.51 MiB/s [2024-11-20T11:32:01.641Z] 29509.25 IOPS, 115.27 MiB/s 00:17:55.879 Latency(us) 00:17:55.879 [2024-11-20T11:32:01.641Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.879 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:17:55.879 xnvme_bdev : 5.00 29035.62 113.42 0.00 0.00 2198.99 89.23 20846.69 00:17:55.879 [2024-11-20T11:32:01.641Z] =================================================================================================================== 00:17:55.879 [2024-11-20T11:32:01.641Z] Total : 29035.62 113.42 0.00 0.00 2198.99 89.23 20846.69 00:17:57.255 00:17:57.255 real 0m14.447s 00:17:57.255 user 0m5.840s 00:17:57.255 sys 0m5.855s 00:17:57.255 11:32:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:57.255 11:32:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:57.255 ************************************ 00:17:57.255 END TEST xnvme_bdevperf 00:17:57.255 ************************************ 00:17:57.255 11:32:02 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:17:57.255 11:32:02 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:57.255 11:32:02 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:57.255 11:32:02 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:57.255 ************************************ 00:17:57.255 START TEST xnvme_fio_plugin 00:17:57.255 ************************************ 00:17:57.255 11:32:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:17:57.255 11:32:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:17:57.255 11:32:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:17:57.255 11:32:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:57.255 11:32:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:57.255 11:32:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:57.255 11:32:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:57.255 11:32:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:57.255 11:32:02 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:57.255 11:32:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:57.255 11:32:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:57.255 11:32:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:57.255 11:32:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:57.255 11:32:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:57.255 11:32:02 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:57.255 11:32:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:57.255 11:32:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:57.255 11:32:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:57.255 11:32:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:57.255 11:32:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:57.255 11:32:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:57.255 11:32:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:57.255 11:32:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:57.255 11:32:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:57.255 { 00:17:57.255 "subsystems": [ 00:17:57.255 { 00:17:57.255 "subsystem": "bdev", 00:17:57.255 "config": [ 00:17:57.255 { 00:17:57.255 "params": { 00:17:57.255 "io_mechanism": "libaio", 00:17:57.255 "conserve_cpu": true, 00:17:57.255 "filename": "/dev/nvme0n1", 00:17:57.255 "name": "xnvme_bdev" 00:17:57.255 }, 00:17:57.255 "method": "bdev_xnvme_create" 00:17:57.255 }, 00:17:57.255 { 00:17:57.255 "method": "bdev_wait_for_examine" 00:17:57.255 } 00:17:57.255 ] 00:17:57.255 } 00:17:57.255 ] 00:17:57.255 } 00:17:57.514 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:57.514 fio-3.35 00:17:57.514 Starting 1 thread 00:18:04.077 00:18:04.077 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71828: Wed Nov 20 11:32:08 2024 00:18:04.077 read: IOPS=31.2k, BW=122MiB/s (128MB/s)(609MiB/5001msec) 00:18:04.077 slat (usec): min=5, max=2688, avg=28.38, stdev=30.15 00:18:04.077 clat (usec): min=103, max=5675, avg=1171.66, stdev=682.10 00:18:04.077 lat (usec): min=158, max=5709, avg=1200.04, stdev=685.84 00:18:04.077 clat percentiles (usec): 00:18:04.077 | 1.00th=[ 221], 5.00th=[ 326], 10.00th=[ 420], 20.00th=[ 586], 00:18:04.077 | 30.00th=[ 734], 40.00th=[ 873], 50.00th=[ 1029], 60.00th=[ 1205], 00:18:04.077 | 70.00th=[ 1418], 80.00th=[ 1713], 90.00th=[ 2114], 95.00th=[ 2442], 00:18:04.077 | 99.00th=[ 3261], 99.50th=[ 3621], 99.90th=[ 4490], 99.95th=[ 4621], 00:18:04.077 | 99.99th=[ 5080] 00:18:04.077 bw ( KiB/s): min=96904, max=164128, per=100.00%, avg=126951.89, 
stdev=19725.68, samples=9 00:18:04.077 iops : min=24226, max=41032, avg=31737.89, stdev=4931.47, samples=9 00:18:04.077 lat (usec) : 250=1.90%, 500=12.61%, 750=16.82%, 1000=16.81% 00:18:04.077 lat (msec) : 2=39.35%, 4=12.24%, 10=0.28% 00:18:04.077 cpu : usr=23.44%, sys=53.82%, ctx=141, majf=0, minf=764 00:18:04.077 IO depths : 1=0.1%, 2=1.4%, 4=4.7%, 8=11.3%, 16=25.4%, 32=55.3%, >=64=1.8% 00:18:04.077 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:04.077 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:18:04.077 issued rwts: total=155924,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:04.077 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:04.077 00:18:04.077 Run status group 0 (all jobs): 00:18:04.077 READ: bw=122MiB/s (128MB/s), 122MiB/s-122MiB/s (128MB/s-128MB/s), io=609MiB (639MB), run=5001-5001msec 00:18:04.644 ----------------------------------------------------- 00:18:04.644 Suppressions used: 00:18:04.644 count bytes template 00:18:04.644 1 11 /usr/src/fio/parse.c 00:18:04.644 1 8 libtcmalloc_minimal.so 00:18:04.644 1 904 libcrypto.so 00:18:04.644 ----------------------------------------------------- 00:18:04.644 00:18:04.644 11:32:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:04.644 11:32:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:04.644 11:32:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:04.644 11:32:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:04.644 11:32:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:04.644 11:32:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:04.644 11:32:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:04.644 11:32:10 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:04.644 11:32:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:04.644 11:32:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:04.644 11:32:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:04.644 11:32:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:04.644 11:32:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:04.644 11:32:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:04.644 11:32:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:04.644 11:32:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:04.903 11:32:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:04.903 11:32:10 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:04.903 11:32:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:04.903 11:32:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:04.903 11:32:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:04.903 { 00:18:04.903 "subsystems": [ 00:18:04.903 { 00:18:04.903 "subsystem": "bdev", 00:18:04.903 "config": [ 00:18:04.903 { 00:18:04.903 "params": { 00:18:04.903 "io_mechanism": "libaio", 00:18:04.903 "conserve_cpu": true, 00:18:04.903 "filename": "/dev/nvme0n1", 00:18:04.903 "name": "xnvme_bdev" 00:18:04.903 }, 00:18:04.903 "method": "bdev_xnvme_create" 00:18:04.903 }, 00:18:04.903 { 00:18:04.903 "method": "bdev_wait_for_examine" 00:18:04.903 } 00:18:04.904 ] 00:18:04.904 } 00:18:04.904 ] 00:18:04.904 } 00:18:04.904 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:04.904 fio-3.35 00:18:04.904 Starting 1 thread 00:18:11.468 00:18:11.468 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71930: Wed Nov 20 11:32:16 2024 00:18:11.468 write: IOPS=32.1k, BW=125MiB/s (131MB/s)(626MiB/5001msec); 0 zone resets 00:18:11.468 slat (usec): min=4, max=3618, avg=27.19, stdev=34.78 00:18:11.468 clat (usec): min=56, max=6182, avg=1163.94, stdev=664.32 00:18:11.468 lat (usec): min=146, max=6270, avg=1191.14, stdev=667.31 00:18:11.468 clat percentiles (usec): 00:18:11.468 | 1.00th=[ 225], 5.00th=[ 334], 10.00th=[ 437], 20.00th=[ 611], 00:18:11.468 | 30.00th=[ 766], 40.00th=[ 898], 50.00th=[ 1037], 60.00th=[ 1188], 00:18:11.468 | 70.00th=[ 1369], 80.00th=[ 1631], 90.00th=[ 2073], 95.00th=[ 2409], 00:18:11.468 | 99.00th=[ 3294], 99.50th=[ 3752], 99.90th=[ 4621], 99.95th=[ 4883], 00:18:11.468 | 99.99th=[ 5276] 00:18:11.468 bw ( KiB/s): min=102696, max=160248, per=99.04%, avg=127043.89, stdev=20463.16, samples=9 00:18:11.468 iops : min=25674, max=40062, avg=31760.89, stdev=5115.83, samples=9 00:18:11.468 lat (usec) : 100=0.01%, 250=1.70%, 500=11.70%, 750=15.67%, 1000=18.39% 00:18:11.468 lat (msec) : 2=41.30%, 4=10.88%, 10=0.36% 00:18:11.468 cpu : usr=24.74%, sys=53.62%, ctx=174, majf=0, minf=764 00:18:11.468 IO depths : 1=0.2%, 2=1.4%, 4=4.5%, 8=10.9%, 16=25.2%, 32=56.1%, >=64=1.8% 00:18:11.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:11.468 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:18:11.468 issued rwts: total=0,160381,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:11.468 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:11.468 00:18:11.468 Run status group 0 (all jobs): 00:18:11.468 WRITE: bw=125MiB/s (131MB/s), 125MiB/s-125MiB/s (131MB/s-131MB/s), io=626MiB (657MB), run=5001-5001msec 00:18:12.404 ----------------------------------------------------- 00:18:12.404 Suppressions used: 00:18:12.404 count bytes template 00:18:12.404 1 11 /usr/src/fio/parse.c 00:18:12.404 1 8 libtcmalloc_minimal.so 00:18:12.404 1 904 libcrypto.so 00:18:12.404 ----------------------------------------------------- 00:18:12.404 00:18:12.404 00:18:12.404 real 0m15.326s 00:18:12.404 user 0m6.578s 00:18:12.404 sys 0m6.169s 00:18:12.404 11:32:18 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:12.404 11:32:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:12.404 ************************************ 00:18:12.404 END TEST xnvme_fio_plugin 00:18:12.404 ************************************ 00:18:12.404 11:32:18 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:18:12.404 11:32:18 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:18:12.404 11:32:18 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:18:12.404 11:32:18 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:18:12.404 11:32:18 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:18:12.404 11:32:18 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:18:12.405 11:32:18 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:18:12.405 11:32:18 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:18:12.405 11:32:18 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:18:12.405 11:32:18 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:12.405 11:32:18 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:12.405 11:32:18 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:12.662 ************************************ 00:18:12.662 START TEST xnvme_rpc 00:18:12.662 ************************************ 00:18:12.662 11:32:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:18:12.662 11:32:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:18:12.663 11:32:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:18:12.663 11:32:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:18:12.663 11:32:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:18:12.663 11:32:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72012 00:18:12.663 11:32:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72012 00:18:12.663 11:32:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:12.663 11:32:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72012 ']' 00:18:12.663 11:32:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:12.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:12.663 11:32:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:12.663 11:32:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:12.663 11:32:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:12.663 11:32:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:12.663 [2024-11-20 11:32:18.319866] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
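The attribute checks below all go through the rpc_xnvme helper, which is nothing more than a jq selector over framework_get_config output. A sketch of the equivalent, calling rpc.py directly where the helper uses rpc_cmd; the expected values are the io_uring/false pair this pass configures:

  # Sketch of what each rpc_xnvme <attr> check expands to.
  rpc_xnvme() {
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev \
      | jq -r ".[] | select(.method == \"bdev_xnvme_create\").params.$1"
  }
  [ "$(rpc_xnvme name)" = xnvme_bdev ]
  [ "$(rpc_xnvme filename)" = /dev/nvme0n1 ]
  [ "$(rpc_xnvme io_mechanism)" = io_uring ]
  [ "$(rpc_xnvme conserve_cpu)" = false ]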
00:18:12.663 [2024-11-20 11:32:18.320325] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72012 ] 00:18:12.921 [2024-11-20 11:32:18.517450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.921 [2024-11-20 11:32:18.659935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.301 11:32:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:14.301 11:32:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:14.301 11:32:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:18:14.301 11:32:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.301 11:32:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:14.301 xnvme_bdev 00:18:14.301 11:32:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.301 11:32:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:18:14.301 11:32:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:14.301 11:32:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.301 11:32:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:18:14.301 11:32:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:14.301 11:32:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.301 11:32:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:18:14.301 11:32:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:18:14.301 11:32:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:14.301 11:32:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:18:14.301 11:32:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.301 11:32:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:14.301 11:32:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.301 11:32:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:18:14.301 11:32:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:18:14.301 11:32:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:14.301 11:32:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.301 11:32:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:18:14.301 11:32:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:14.301 11:32:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.301 11:32:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:18:14.301 11:32:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:18:14.301 11:32:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:14.301 11:32:19 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.301 11:32:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:14.301 11:32:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:18:14.301 11:32:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.301 11:32:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:18:14.301 11:32:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:18:14.301 11:32:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.301 11:32:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:14.301 11:32:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.301 11:32:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72012 00:18:14.301 11:32:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72012 ']' 00:18:14.301 11:32:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72012 00:18:14.301 11:32:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:18:14.301 11:32:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:14.301 11:32:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72012 00:18:14.301 killing process with pid 72012 00:18:14.301 11:32:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:14.301 11:32:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:14.301 11:32:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72012' 00:18:14.301 11:32:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72012 00:18:14.301 11:32:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72012 00:18:17.589 ************************************ 00:18:17.589 END TEST xnvme_rpc 00:18:17.589 ************************************ 00:18:17.589 00:18:17.589 real 0m4.526s 00:18:17.589 user 0m4.616s 00:18:17.589 sys 0m0.571s 00:18:17.589 11:32:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:17.589 11:32:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:17.589 11:32:22 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:18:17.589 11:32:22 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:17.589 11:32:22 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:17.589 11:32:22 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:17.589 ************************************ 00:18:17.589 START TEST xnvme_bdevperf 00:18:17.589 ************************************ 00:18:17.589 11:32:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:18:17.589 11:32:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:18:17.589 11:32:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:18:17.589 11:32:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:17.589 11:32:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:18:17.589 11:32:22 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:18:17.589 11:32:22 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:17.589 11:32:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:17.589 { 00:18:17.589 "subsystems": [ 00:18:17.589 { 00:18:17.589 "subsystem": "bdev", 00:18:17.589 "config": [ 00:18:17.589 { 00:18:17.589 "params": { 00:18:17.589 "io_mechanism": "io_uring", 00:18:17.589 "conserve_cpu": false, 00:18:17.589 "filename": "/dev/nvme0n1", 00:18:17.589 "name": "xnvme_bdev" 00:18:17.589 }, 00:18:17.589 "method": "bdev_xnvme_create" 00:18:17.589 }, 00:18:17.589 { 00:18:17.589 "method": "bdev_wait_for_examine" 00:18:17.589 } 00:18:17.589 ] 00:18:17.589 } 00:18:17.589 ] 00:18:17.589 } 00:18:17.589 [2024-11-20 11:32:22.881514] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:18:17.589 [2024-11-20 11:32:22.881920] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72104 ] 00:18:17.589 [2024-11-20 11:32:23.075169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.589 [2024-11-20 11:32:23.222261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.848 Running I/O for 5 seconds... 00:18:19.850 43847.00 IOPS, 171.28 MiB/s [2024-11-20T11:32:26.659Z] 42588.50 IOPS, 166.36 MiB/s [2024-11-20T11:32:28.034Z] 42304.67 IOPS, 165.25 MiB/s [2024-11-20T11:32:28.972Z] 43113.00 IOPS, 168.41 MiB/s 00:18:23.210 Latency(us) 00:18:23.210 [2024-11-20T11:32:28.972Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:23.210 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:18:23.210 xnvme_bdev : 5.00 44049.46 172.07 0.00 0.00 1448.62 351.09 10735.42 00:18:23.210 [2024-11-20T11:32:28.972Z] =================================================================================================================== 00:18:23.210 [2024-11-20T11:32:28.972Z] Total : 44049.46 172.07 0.00 0.00 1448.62 351.09 10735.42 00:18:24.587 11:32:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:24.588 11:32:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:18:24.588 11:32:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:24.588 11:32:29 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:24.588 11:32:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:24.588 { 00:18:24.588 "subsystems": [ 00:18:24.588 { 00:18:24.588 "subsystem": "bdev", 00:18:24.588 "config": [ 00:18:24.588 { 00:18:24.588 "params": { 00:18:24.588 "io_mechanism": "io_uring", 00:18:24.588 "conserve_cpu": false, 00:18:24.588 "filename": "/dev/nvme0n1", 00:18:24.588 "name": "xnvme_bdev" 00:18:24.588 }, 00:18:24.588 "method": "bdev_xnvme_create" 00:18:24.588 }, 00:18:24.588 { 00:18:24.588 "method": "bdev_wait_for_examine" 00:18:24.588 } 00:18:24.588 ] 00:18:24.588 } 00:18:24.588 ] 00:18:24.588 } 00:18:24.588 [2024-11-20 11:32:30.017464] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
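The io_pattern loop driving these runs uses a bash nameref: io_pattern_ref is bound to whichever array is named by the current io mechanism, so the same function walks every workload for libaio and io_uring alike. A sketch of the pattern; the array contents are assumed from the two runs this log actually performs, and a config file stands in for the /dev/fd/62 plumbing:

  # Sketch of the nameref-driven pattern loop in xnvme_bdevperf.
  xnvme_bdevperf() {
    local -n io_pattern_ref=$1   # nameref: binds to the array whose *name* is "$1"
    local io_pattern
    for io_pattern in "${io_pattern_ref[@]}"; do
      /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /tmp/xnvme_bdev.json -q 64 -w "$io_pattern" -t 5 -T xnvme_bdev -o 4096
    done
  }
  io_uring=(randread randwrite)   # assumed contents; these are the two runs in this log
  xnvme_bdevperf io_uring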
00:18:24.588 [2024-11-20 11:32:30.017632] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72190 ] 00:18:24.588 [2024-11-20 11:32:30.199614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.588 [2024-11-20 11:32:30.336679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.156 Running I/O for 5 seconds... 00:18:27.025 43556.00 IOPS, 170.14 MiB/s [2024-11-20T11:32:34.162Z] 44599.00 IOPS, 174.21 MiB/s [2024-11-20T11:32:35.094Z] 42959.67 IOPS, 167.81 MiB/s [2024-11-20T11:32:36.030Z] 42020.75 IOPS, 164.14 MiB/s 00:18:30.268 Latency(us) 00:18:30.268 [2024-11-20T11:32:36.030Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.268 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:18:30.268 xnvme_bdev : 5.00 41692.06 162.86 0.00 0.00 1530.26 108.25 54426.09 00:18:30.268 [2024-11-20T11:32:36.030Z] =================================================================================================================== 00:18:30.268 [2024-11-20T11:32:36.030Z] Total : 41692.06 162.86 0.00 0.00 1530.26 108.25 54426.09 00:18:31.203 00:18:31.203 real 0m14.197s 00:18:31.203 user 0m6.888s 00:18:31.203 sys 0m7.081s 00:18:31.203 11:32:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:31.203 ************************************ 00:18:31.203 END TEST xnvme_bdevperf 00:18:31.203 ************************************ 00:18:31.203 11:32:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:31.462 11:32:36 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:18:31.462 11:32:36 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:31.462 11:32:36 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:31.462 11:32:36 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:31.462 ************************************ 00:18:31.462 START TEST xnvme_fio_plugin 00:18:31.462 ************************************ 00:18:31.462 11:32:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:18:31.462 11:32:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:18:31.462 11:32:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:18:31.462 11:32:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:31.462 11:32:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:31.462 11:32:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:31.462 11:32:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:31.462 11:32:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:31.462 11:32:36 
nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:31.462 11:32:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:31.462 11:32:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:31.462 11:32:37 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:31.462 11:32:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:31.462 11:32:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:31.462 11:32:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:31.462 11:32:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:31.462 11:32:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:31.462 11:32:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:31.462 11:32:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:31.462 11:32:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:31.462 11:32:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:31.462 11:32:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:31.462 11:32:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:31.462 11:32:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:31.462 { 00:18:31.462 "subsystems": [ 00:18:31.462 { 00:18:31.462 "subsystem": "bdev", 00:18:31.462 "config": [ 00:18:31.462 { 00:18:31.462 "params": { 00:18:31.462 "io_mechanism": "io_uring", 00:18:31.462 "conserve_cpu": false, 00:18:31.462 "filename": "/dev/nvme0n1", 00:18:31.462 "name": "xnvme_bdev" 00:18:31.462 }, 00:18:31.462 "method": "bdev_xnvme_create" 00:18:31.462 }, 00:18:31.462 { 00:18:31.462 "method": "bdev_wait_for_examine" 00:18:31.462 } 00:18:31.462 ] 00:18:31.462 } 00:18:31.462 ] 00:18:31.462 } 00:18:31.720 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:31.720 fio-3.35 00:18:31.720 Starting 1 thread 00:18:38.286 00:18:38.286 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72320: Wed Nov 20 11:32:43 2024 00:18:38.286 read: IOPS=45.2k, BW=177MiB/s (185MB/s)(884MiB/5002msec) 00:18:38.286 slat (nsec): min=2709, max=79311, avg=4164.45, stdev=1960.79 00:18:38.286 clat (usec): min=168, max=6039, avg=1244.61, stdev=254.87 00:18:38.286 lat (usec): min=172, max=6045, avg=1248.78, stdev=255.58 00:18:38.286 clat percentiles (usec): 00:18:38.286 | 1.00th=[ 873], 5.00th=[ 963], 10.00th=[ 1012], 20.00th=[ 1074], 00:18:38.286 | 30.00th=[ 1106], 40.00th=[ 1156], 50.00th=[ 1205], 60.00th=[ 1237], 00:18:38.286 | 70.00th=[ 1303], 80.00th=[ 1369], 90.00th=[ 1532], 95.00th=[ 1729], 00:18:38.286 | 99.00th=[ 2040], 99.50th=[ 2245], 99.90th=[ 2900], 99.95th=[ 3884], 00:18:38.286 | 99.99th=[ 5932] 00:18:38.286 bw ( KiB/s): min=166192, max=196096, per=100.00%, avg=181932.44, 
stdev=8699.14, samples=9 00:18:38.286 iops : min=41548, max=49024, avg=45483.11, stdev=2174.79, samples=9 00:18:38.286 lat (usec) : 250=0.01%, 500=0.01%, 750=0.03%, 1000=8.53% 00:18:38.286 lat (msec) : 2=90.15%, 4=1.23%, 10=0.04% 00:18:38.286 cpu : usr=33.91%, sys=64.99%, ctx=16, majf=0, minf=762 00:18:38.286 IO depths : 1=1.4%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.2%, >=64=1.6% 00:18:38.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:38.286 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:18:38.286 issued rwts: total=226278,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:38.286 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:38.286 00:18:38.286 Run status group 0 (all jobs): 00:18:38.286 READ: bw=177MiB/s (185MB/s), 177MiB/s-177MiB/s (185MB/s-185MB/s), io=884MiB (927MB), run=5002-5002msec 00:18:39.229 ----------------------------------------------------- 00:18:39.229 Suppressions used: 00:18:39.229 count bytes template 00:18:39.229 1 11 /usr/src/fio/parse.c 00:18:39.229 1 8 libtcmalloc_minimal.so 00:18:39.229 1 904 libcrypto.so 00:18:39.229 ----------------------------------------------------- 00:18:39.229 00:18:39.229 11:32:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:39.229 11:32:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:39.229 11:32:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:39.229 11:32:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:39.229 11:32:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:39.229 11:32:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:39.229 11:32:44 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:39.229 11:32:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:39.229 11:32:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:39.229 11:32:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:39.229 11:32:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:39.229 11:32:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:39.229 11:32:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:39.229 11:32:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:39.229 11:32:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:39.229 11:32:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:39.229 11:32:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:39.229 11:32:44 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:39.229 11:32:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:39.229 11:32:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:39.229 11:32:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:39.229 { 00:18:39.229 "subsystems": [ 00:18:39.229 { 00:18:39.229 "subsystem": "bdev", 00:18:39.229 "config": [ 00:18:39.229 { 00:18:39.229 "params": { 00:18:39.229 "io_mechanism": "io_uring", 00:18:39.229 "conserve_cpu": false, 00:18:39.229 "filename": "/dev/nvme0n1", 00:18:39.229 "name": "xnvme_bdev" 00:18:39.229 }, 00:18:39.229 "method": "bdev_xnvme_create" 00:18:39.229 }, 00:18:39.229 { 00:18:39.229 "method": "bdev_wait_for_examine" 00:18:39.229 } 00:18:39.229 ] 00:18:39.229 } 00:18:39.229 ] 00:18:39.229 } 00:18:39.487 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:39.487 fio-3.35 00:18:39.487 Starting 1 thread 00:18:46.137 00:18:46.137 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72413: Wed Nov 20 11:32:50 2024 00:18:46.137 write: IOPS=45.3k, BW=177MiB/s (185MB/s)(884MiB/5001msec); 0 zone resets 00:18:46.137 slat (usec): min=2, max=254, avg= 4.50, stdev= 2.28 00:18:46.137 clat (usec): min=779, max=5104, avg=1231.64, stdev=258.45 00:18:46.137 lat (usec): min=782, max=5108, avg=1236.13, stdev=259.43 00:18:46.137 clat percentiles (usec): 00:18:46.137 | 1.00th=[ 865], 5.00th=[ 930], 10.00th=[ 979], 20.00th=[ 1037], 00:18:46.137 | 30.00th=[ 1090], 40.00th=[ 1139], 50.00th=[ 1172], 60.00th=[ 1237], 00:18:46.137 | 70.00th=[ 1287], 80.00th=[ 1385], 90.00th=[ 1565], 95.00th=[ 1745], 00:18:46.137 | 99.00th=[ 2073], 99.50th=[ 2212], 99.90th=[ 2638], 99.95th=[ 3425], 00:18:46.137 | 99.99th=[ 5014] 00:18:46.137 bw ( KiB/s): min=161280, max=201216, per=100.00%, avg=181845.33, stdev=12779.50, samples=9 00:18:46.137 iops : min=40320, max=50304, avg=45461.33, stdev=3194.88, samples=9 00:18:46.137 lat (usec) : 1000=13.62% 00:18:46.137 lat (msec) : 2=84.97%, 4=1.38%, 10=0.03% 00:18:46.137 cpu : usr=35.62%, sys=63.28%, ctx=13, majf=0, minf=762 00:18:46.137 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:18:46.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:46.137 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:18:46.137 issued rwts: total=0,226400,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:46.137 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:46.137 00:18:46.137 Run status group 0 (all jobs): 00:18:46.137 WRITE: bw=177MiB/s (185MB/s), 177MiB/s-177MiB/s (185MB/s-185MB/s), io=884MiB (927MB), run=5001-5001msec 00:18:47.085 ----------------------------------------------------- 00:18:47.085 Suppressions used: 00:18:47.085 count bytes template 00:18:47.085 1 11 /usr/src/fio/parse.c 00:18:47.085 1 8 libtcmalloc_minimal.so 00:18:47.085 1 904 libcrypto.so 00:18:47.085 ----------------------------------------------------- 00:18:47.085 00:18:47.085 00:18:47.085 real 0m15.722s 00:18:47.085 user 0m8.144s 00:18:47.085 sys 0m7.172s 00:18:47.085 11:32:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 
-- # xtrace_disable 00:18:47.085 11:32:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:47.085 ************************************ 00:18:47.085 END TEST xnvme_fio_plugin 00:18:47.085 ************************************ 00:18:47.085 11:32:52 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:18:47.085 11:32:52 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:18:47.085 11:32:52 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:18:47.085 11:32:52 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:18:47.085 11:32:52 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:47.085 11:32:52 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:47.085 11:32:52 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:47.085 ************************************ 00:18:47.085 START TEST xnvme_rpc 00:18:47.085 ************************************ 00:18:47.085 11:32:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:18:47.085 11:32:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:18:47.085 11:32:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:18:47.085 11:32:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:18:47.085 11:32:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:18:47.085 11:32:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72506 00:18:47.085 11:32:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:47.085 11:32:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72506 00:18:47.085 11:32:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72506 ']' 00:18:47.085 11:32:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:47.085 11:32:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:47.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:47.085 11:32:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:47.085 11:32:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:47.085 11:32:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:47.344 [2024-11-20 11:32:52.938598] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
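The xnvme_rpc test starting here drives the xnvme bdev module purely over JSON-RPC: start spdk_tgt, create an xnvme bdev, read its parameters back, delete it, then kill the target. A minimal standalone sketch of the same flow, assuming a running spdk_tgt and the paths used in this job; the argument order mirrors the rpc_cmd call in the trace that follows:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# create an xnvme bdev on the block device via io_uring,
# with conserve_cpu enabled (-c), as traced below
$RPC bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c

# dump the bdev subsystem config and pull one creation parameter out
$RPC framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.name'   # -> xnvme_bdev

# tear the bdev down again
$RPC bdev_xnvme_delete xnvme_bdev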
00:18:47.344 [2024-11-20 11:32:52.939031] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72506 ] 00:18:47.601 [2024-11-20 11:32:53.142209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.601 [2024-11-20 11:32:53.307875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:48.980 11:32:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:48.980 11:32:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:48.980 11:32:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:18:48.980 11:32:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.980 11:32:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:48.980 xnvme_bdev 00:18:48.980 11:32:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.980 11:32:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:18:48.980 11:32:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:18:48.980 11:32:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:48.980 11:32:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.980 11:32:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:48.980 11:32:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.980 11:32:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:18:48.980 11:32:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:18:48.980 11:32:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:48.980 11:32:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.980 11:32:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:48.980 11:32:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:18:48.980 11:32:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.980 11:32:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:18:48.980 11:32:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:18:48.980 11:32:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:18:48.980 11:32:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:48.980 11:32:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.980 11:32:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:48.980 11:32:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.980 11:32:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:18:48.980 11:32:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:18:48.980 11:32:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:48.980 11:32:54 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.980 11:32:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:18:48.980 11:32:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:48.980 11:32:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.980 11:32:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:18:48.980 11:32:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:18:48.980 11:32:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.980 11:32:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:48.980 11:32:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.980 11:32:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72506 00:18:48.980 11:32:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72506 ']' 00:18:48.980 11:32:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72506 00:18:48.980 11:32:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:18:48.980 11:32:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:48.980 11:32:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72506 00:18:48.980 killing process with pid 72506 00:18:48.980 11:32:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:48.980 11:32:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:48.980 11:32:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72506' 00:18:48.980 11:32:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72506 00:18:48.980 11:32:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72506 00:18:52.266 00:18:52.266 real 0m4.607s 00:18:52.266 user 0m4.743s 00:18:52.267 sys 0m0.582s 00:18:52.267 11:32:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:52.267 ************************************ 00:18:52.267 11:32:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:52.267 END TEST xnvme_rpc 00:18:52.267 ************************************ 00:18:52.267 11:32:57 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:18:52.267 11:32:57 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:52.267 11:32:57 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:52.267 11:32:57 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:52.267 ************************************ 00:18:52.267 START TEST xnvme_bdevperf 00:18:52.267 ************************************ 00:18:52.267 11:32:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:18:52.267 11:32:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:18:52.267 11:32:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:18:52.267 11:32:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:52.267 11:32:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:18:52.267 11:32:57 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:18:52.267 11:32:57 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:52.267 11:32:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:52.267 { 00:18:52.267 "subsystems": [ 00:18:52.267 { 00:18:52.267 "subsystem": "bdev", 00:18:52.267 "config": [ 00:18:52.267 { 00:18:52.267 "params": { 00:18:52.267 "io_mechanism": "io_uring", 00:18:52.267 "conserve_cpu": true, 00:18:52.267 "filename": "/dev/nvme0n1", 00:18:52.267 "name": "xnvme_bdev" 00:18:52.267 }, 00:18:52.267 "method": "bdev_xnvme_create" 00:18:52.267 }, 00:18:52.267 { 00:18:52.267 "method": "bdev_wait_for_examine" 00:18:52.267 } 00:18:52.267 ] 00:18:52.267 } 00:18:52.267 ] 00:18:52.267 } 00:18:52.267 [2024-11-20 11:32:57.543912] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:18:52.267 [2024-11-20 11:32:57.544458] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72601 ] 00:18:52.267 [2024-11-20 11:32:57.717392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.267 [2024-11-20 11:32:57.848576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:52.527 Running I/O for 5 seconds... 00:18:54.857 47834.00 IOPS, 186.85 MiB/s [2024-11-20T11:33:01.554Z] 44365.00 IOPS, 173.30 MiB/s [2024-11-20T11:33:02.490Z] 45303.00 IOPS, 176.96 MiB/s [2024-11-20T11:33:03.426Z] 47000.00 IOPS, 183.59 MiB/s 00:18:57.664 Latency(us) 00:18:57.664 [2024-11-20T11:33:03.426Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.664 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:18:57.664 xnvme_bdev : 5.00 47021.01 183.68 0.00 0.00 1357.08 147.26 8862.96 00:18:57.664 [2024-11-20T11:33:03.426Z] =================================================================================================================== 00:18:57.664 [2024-11-20T11:33:03.426Z] Total : 47021.01 183.68 0.00 0.00 1357.08 147.26 8862.96 00:18:59.043 11:33:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:59.043 11:33:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:18:59.043 11:33:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:59.043 11:33:04 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:59.043 11:33:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:59.043 { 00:18:59.043 "subsystems": [ 00:18:59.043 { 00:18:59.043 "subsystem": "bdev", 00:18:59.043 "config": [ 00:18:59.043 { 00:18:59.043 "params": { 00:18:59.043 "io_mechanism": "io_uring", 00:18:59.043 "conserve_cpu": true, 00:18:59.043 "filename": "/dev/nvme0n1", 00:18:59.043 "name": "xnvme_bdev" 00:18:59.043 }, 00:18:59.043 "method": "bdev_xnvme_create" 00:18:59.043 }, 00:18:59.043 { 00:18:59.043 "method": "bdev_wait_for_examine" 00:18:59.043 } 00:18:59.043 ] 00:18:59.043 } 00:18:59.043 ] 00:18:59.043 } 00:18:59.043 [2024-11-20 11:33:04.661027] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
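The randread table above and the randwrite run starting here both follow the same bdevperf pattern: gen_conf prints the JSON shown in the braces and feeds it to the binary through /dev/fd/62. A file-based sketch of an equivalent invocation, assuming the SPDK build tree used in this job; the JSON is copied from the config block printed above:

cat > /tmp/xnvme_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "io_mechanism": "io_uring",
            "conserve_cpu": true,
            "filename": "/dev/nvme0n1",
            "name": "xnvme_bdev"
          },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  --json /tmp/xnvme_bdev.json -q 64 -o 4096 -w randwrite -t 5 -T xnvme_bdev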
00:18:59.043 [2024-11-20 11:33:04.661244] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72676 ] 00:18:59.302 [2024-11-20 11:33:04.856887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.302 [2024-11-20 11:33:04.990695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.869 Running I/O for 5 seconds... 00:19:01.739 41408.00 IOPS, 161.75 MiB/s [2024-11-20T11:33:08.437Z] 43089.00 IOPS, 168.32 MiB/s [2024-11-20T11:33:09.810Z] 42763.33 IOPS, 167.04 MiB/s [2024-11-20T11:33:10.745Z] 42703.25 IOPS, 166.81 MiB/s 00:19:04.983 Latency(us) 00:19:04.983 [2024-11-20T11:33:10.745Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:04.983 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:19:04.983 xnvme_bdev : 5.00 43250.45 168.95 0.00 0.00 1474.82 413.50 7084.13 00:19:04.983 [2024-11-20T11:33:10.745Z] =================================================================================================================== 00:19:04.983 [2024-11-20T11:33:10.745Z] Total : 43250.45 168.95 0.00 0.00 1474.82 413.50 7084.13 00:19:06.359 ************************************ 00:19:06.359 END TEST xnvme_bdevperf 00:19:06.359 ************************************ 00:19:06.359 00:19:06.359 real 0m14.508s 00:19:06.359 user 0m8.290s 00:19:06.359 sys 0m5.665s 00:19:06.359 11:33:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:06.359 11:33:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:06.359 11:33:12 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:19:06.359 11:33:12 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:06.359 11:33:12 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:06.359 11:33:12 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:06.359 ************************************ 00:19:06.359 START TEST xnvme_fio_plugin 00:19:06.359 ************************************ 00:19:06.359 11:33:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:19:06.359 11:33:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:19:06.359 11:33:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:19:06.359 11:33:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:06.359 11:33:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:06.359 11:33:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:06.359 11:33:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:06.359 11:33:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:06.359 11:33:12 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # 
xtrace_disable 00:19:06.359 11:33:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:06.359 11:33:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:06.359 11:33:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:06.359 11:33:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:06.359 11:33:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:06.359 11:33:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:06.359 11:33:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:06.359 11:33:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:06.359 11:33:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:06.359 11:33:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:06.359 11:33:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:06.359 11:33:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:06.359 11:33:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:06.359 11:33:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:06.359 11:33:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:06.359 { 00:19:06.359 "subsystems": [ 00:19:06.359 { 00:19:06.359 "subsystem": "bdev", 00:19:06.359 "config": [ 00:19:06.359 { 00:19:06.359 "params": { 00:19:06.359 "io_mechanism": "io_uring", 00:19:06.359 "conserve_cpu": true, 00:19:06.359 "filename": "/dev/nvme0n1", 00:19:06.359 "name": "xnvme_bdev" 00:19:06.359 }, 00:19:06.359 "method": "bdev_xnvme_create" 00:19:06.359 }, 00:19:06.359 { 00:19:06.359 "method": "bdev_wait_for_examine" 00:19:06.359 } 00:19:06.359 ] 00:19:06.359 } 00:19:06.359 ] 00:19:06.359 } 00:19:06.618 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:06.618 fio-3.35 00:19:06.618 Starting 1 thread 00:19:13.221 00:19:13.221 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72812: Wed Nov 20 11:33:18 2024 00:19:13.221 read: IOPS=47.7k, BW=186MiB/s (195MB/s)(932MiB/5001msec) 00:19:13.221 slat (nsec): min=2682, max=70296, avg=3802.08, stdev=1423.51 00:19:13.221 clat (usec): min=331, max=3438, avg=1186.37, stdev=205.18 00:19:13.221 lat (usec): min=335, max=3443, avg=1190.17, stdev=205.68 00:19:13.221 clat percentiles (usec): 00:19:13.221 | 1.00th=[ 865], 5.00th=[ 922], 10.00th=[ 971], 20.00th=[ 1029], 00:19:13.221 | 30.00th=[ 1074], 40.00th=[ 1123], 50.00th=[ 1156], 60.00th=[ 1205], 00:19:13.221 | 70.00th=[ 1254], 80.00th=[ 1303], 90.00th=[ 1418], 95.00th=[ 1565], 00:19:13.221 | 99.00th=[ 1876], 99.50th=[ 1975], 99.90th=[ 2474], 99.95th=[ 2737], 00:19:13.221 | 99.99th=[ 3326] 00:19:13.221 bw ( KiB/s): min=173056, max=217600, per=100.00%, avg=191050.67, 
stdev=16679.49, samples=9 00:19:13.221 iops : min=43264, max=54400, avg=47762.67, stdev=4169.87, samples=9 00:19:13.221 lat (usec) : 500=0.01%, 1000=14.64% 00:19:13.221 lat (msec) : 2=84.93%, 4=0.42% 00:19:13.221 cpu : usr=42.72%, sys=53.46%, ctx=16, majf=0, minf=762 00:19:13.221 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:19:13.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:13.221 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:19:13.221 issued rwts: total=238673,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:13.221 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:13.221 00:19:13.221 Run status group 0 (all jobs): 00:19:13.221 READ: bw=186MiB/s (195MB/s), 186MiB/s-186MiB/s (195MB/s-195MB/s), io=932MiB (978MB), run=5001-5001msec 00:19:14.596 ----------------------------------------------------- 00:19:14.596 Suppressions used: 00:19:14.596 count bytes template 00:19:14.596 1 11 /usr/src/fio/parse.c 00:19:14.596 1 8 libtcmalloc_minimal.so 00:19:14.596 1 904 libcrypto.so 00:19:14.596 ----------------------------------------------------- 00:19:14.596 00:19:14.596 11:33:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:14.596 11:33:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:14.596 11:33:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:14.596 11:33:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:14.596 11:33:19 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:14.596 11:33:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:14.596 11:33:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:14.596 11:33:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:14.596 11:33:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:14.596 11:33:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:14.596 11:33:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:14.596 11:33:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:14.596 11:33:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:14.596 11:33:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:14.597 11:33:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:14.597 11:33:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:14.597 11:33:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:14.597 11:33:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n 
/usr/lib64/libasan.so.8 ]] 00:19:14.597 11:33:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:14.597 11:33:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:14.597 11:33:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:14.597 { 00:19:14.597 "subsystems": [ 00:19:14.597 { 00:19:14.597 "subsystem": "bdev", 00:19:14.597 "config": [ 00:19:14.597 { 00:19:14.597 "params": { 00:19:14.597 "io_mechanism": "io_uring", 00:19:14.597 "conserve_cpu": true, 00:19:14.597 "filename": "/dev/nvme0n1", 00:19:14.597 "name": "xnvme_bdev" 00:19:14.597 }, 00:19:14.597 "method": "bdev_xnvme_create" 00:19:14.597 }, 00:19:14.597 { 00:19:14.597 "method": "bdev_wait_for_examine" 00:19:14.597 } 00:19:14.597 ] 00:19:14.597 } 00:19:14.597 ] 00:19:14.597 } 00:19:14.597 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:14.597 fio-3.35 00:19:14.597 Starting 1 thread 00:19:21.190 00:19:21.190 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72904: Wed Nov 20 11:33:26 2024 00:19:21.190 write: IOPS=44.4k, BW=173MiB/s (182MB/s)(867MiB/5001msec); 0 zone resets 00:19:21.190 slat (usec): min=2, max=406, avg= 4.34, stdev= 2.02 00:19:21.190 clat (usec): min=148, max=4951, avg=1269.18, stdev=215.60 00:19:21.190 lat (usec): min=173, max=4957, avg=1273.53, stdev=216.25 00:19:21.190 clat percentiles (usec): 00:19:21.190 | 1.00th=[ 930], 5.00th=[ 1012], 10.00th=[ 1057], 20.00th=[ 1106], 00:19:21.190 | 30.00th=[ 1156], 40.00th=[ 1188], 50.00th=[ 1237], 60.00th=[ 1270], 00:19:21.190 | 70.00th=[ 1319], 80.00th=[ 1385], 90.00th=[ 1516], 95.00th=[ 1696], 00:19:21.190 | 99.00th=[ 2040], 99.50th=[ 2147], 99.90th=[ 2540], 99.95th=[ 2704], 00:19:21.190 | 99.99th=[ 3687] 00:19:21.190 bw ( KiB/s): min=161888, max=189952, per=100.00%, avg=177959.11, stdev=8964.42, samples=9 00:19:21.190 iops : min=40472, max=47488, avg=44489.78, stdev=2241.11, samples=9 00:19:21.190 lat (usec) : 250=0.01%, 500=0.01%, 1000=3.81% 00:19:21.190 lat (msec) : 2=94.97%, 4=1.21%, 10=0.01% 00:19:21.190 cpu : usr=45.58%, sys=50.64%, ctx=12, majf=0, minf=762 00:19:21.190 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:19:21.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.190 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.0%, 64=1.5%, >=64=0.0% 00:19:21.190 issued rwts: total=0,221836,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.190 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:21.190 00:19:21.190 Run status group 0 (all jobs): 00:19:21.190 WRITE: bw=173MiB/s (182MB/s), 173MiB/s-173MiB/s (182MB/s-182MB/s), io=867MiB (909MB), run=5001-5001msec 00:19:22.125 ----------------------------------------------------- 00:19:22.125 Suppressions used: 00:19:22.125 count bytes template 00:19:22.125 1 11 /usr/src/fio/parse.c 00:19:22.125 1 8 libtcmalloc_minimal.so 00:19:22.125 1 904 libcrypto.so 00:19:22.125 ----------------------------------------------------- 00:19:22.125 00:19:22.125 00:19:22.125 real 0m15.626s 00:19:22.125 user 0m8.824s 00:19:22.125 sys 0m6.114s 00:19:22.125 11:33:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 
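Every fio_bdev pass in this group, including the one that just finished, goes through the same preamble visible in the trace: ldd inspects the external ioengine, and if it links a sanitizer runtime the harness preloads that library ahead of the plugin so ASAN initializes before fio dlopen()s it. A condensed sketch of that detection logic, with paths taken from this log:

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
sanitizers=('libasan' 'libclang_rt.asan')

for sanitizer in "${sanitizers[@]}"; do
    # third ldd column is the resolved path, e.g. /usr/lib64/libasan.so.8
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n $asan_lib ]] && break
done

# preload the sanitizer runtime plus the plugin itself, then run fio;
# point --spdk_json_conf at a JSON config like the blocks shown above
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme_bdev.json \
    --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
    --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev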
00:19:22.125 ************************************ 00:19:22.125 END TEST xnvme_fio_plugin 00:19:22.125 ************************************ 00:19:22.125 11:33:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:22.125 11:33:27 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:19:22.125 11:33:27 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:19:22.125 11:33:27 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:19:22.125 11:33:27 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:19:22.125 11:33:27 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:19:22.125 11:33:27 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:19:22.125 11:33:27 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:19:22.126 11:33:27 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:19:22.126 11:33:27 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:19:22.126 11:33:27 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:22.126 11:33:27 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:22.126 11:33:27 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:22.126 ************************************ 00:19:22.126 START TEST xnvme_rpc 00:19:22.126 ************************************ 00:19:22.126 11:33:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:19:22.126 11:33:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:19:22.126 11:33:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:19:22.126 11:33:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:19:22.126 11:33:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:19:22.126 11:33:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72996 00:19:22.126 11:33:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72996 00:19:22.126 11:33:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72996 ']' 00:19:22.126 11:33:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:22.126 11:33:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:22.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:22.126 11:33:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:22.126 11:33:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:22.126 11:33:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:22.126 11:33:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:22.126 [2024-11-20 11:33:27.849969] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:19:22.126 [2024-11-20 11:33:27.850145] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72996 ] 00:19:22.384 [2024-11-20 11:33:28.042996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.643 [2024-11-20 11:33:28.176920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.578 11:33:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:23.578 11:33:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:19:23.578 11:33:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:19:23.578 11:33:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.578 11:33:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:23.578 xnvme_bdev 00:19:23.578 11:33:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.578 11:33:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:19:23.578 11:33:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:23.578 11:33:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.578 11:33:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:23.578 11:33:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:19:23.578 11:33:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.578 11:33:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:19:23.578 11:33:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:19:23.578 11:33:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:23.578 11:33:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.578 11:33:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:19:23.578 11:33:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:23.578 11:33:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.578 11:33:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:19:23.578 11:33:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:19:23.578 11:33:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:23.578 11:33:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:19:23.578 11:33:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.578 11:33:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:23.835 11:33:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.836 11:33:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:19:23.836 11:33:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:19:23.836 11:33:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:23.836 11:33:29 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:19:23.836 11:33:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.836 11:33:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:23.836 11:33:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.836 11:33:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:19:23.836 11:33:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:19:23.836 11:33:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.836 11:33:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:23.836 11:33:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.836 11:33:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72996 00:19:23.836 11:33:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72996 ']' 00:19:23.836 11:33:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72996 00:19:23.836 11:33:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:19:23.836 11:33:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:23.836 11:33:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72996 00:19:23.836 killing process with pid 72996 00:19:23.836 11:33:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:23.836 11:33:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:23.836 11:33:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72996' 00:19:23.836 11:33:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72996 00:19:23.836 11:33:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72996 00:19:27.132 00:19:27.132 real 0m4.576s 00:19:27.132 user 0m4.776s 00:19:27.132 sys 0m0.567s 00:19:27.132 11:33:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:27.132 ************************************ 00:19:27.132 END TEST xnvme_rpc 00:19:27.132 ************************************ 00:19:27.132 11:33:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:27.132 11:33:32 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:19:27.132 11:33:32 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:27.132 11:33:32 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:27.132 11:33:32 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:27.132 ************************************ 00:19:27.132 START TEST xnvme_bdevperf 00:19:27.132 ************************************ 00:19:27.132 11:33:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:19:27.132 11:33:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:19:27.132 11:33:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:19:27.132 11:33:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:27.132 11:33:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:19:27.132 11:33:32 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:19:27.132 11:33:32 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:27.132 11:33:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:27.132 { 00:19:27.132 "subsystems": [ 00:19:27.132 { 00:19:27.132 "subsystem": "bdev", 00:19:27.132 "config": [ 00:19:27.132 { 00:19:27.132 "params": { 00:19:27.132 "io_mechanism": "io_uring_cmd", 00:19:27.132 "conserve_cpu": false, 00:19:27.132 "filename": "/dev/ng0n1", 00:19:27.132 "name": "xnvme_bdev" 00:19:27.132 }, 00:19:27.132 "method": "bdev_xnvme_create" 00:19:27.132 }, 00:19:27.132 { 00:19:27.132 "method": "bdev_wait_for_examine" 00:19:27.132 } 00:19:27.132 ] 00:19:27.132 } 00:19:27.132 ] 00:19:27.132 } 00:19:27.132 [2024-11-20 11:33:32.473741] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:19:27.132 [2024-11-20 11:33:32.474145] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73087 ] 00:19:27.132 [2024-11-20 11:33:32.671444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.132 [2024-11-20 11:33:32.800297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:27.699 Running I/O for 5 seconds... 00:19:29.568 52288.00 IOPS, 204.25 MiB/s [2024-11-20T11:33:36.262Z] 51168.00 IOPS, 199.88 MiB/s [2024-11-20T11:33:37.192Z] 51648.00 IOPS, 201.75 MiB/s [2024-11-20T11:33:38.563Z] 51104.00 IOPS, 199.62 MiB/s [2024-11-20T11:33:38.564Z] 51123.20 IOPS, 199.70 MiB/s 00:19:32.802 Latency(us) 00:19:32.802 [2024-11-20T11:33:38.564Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.802 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:19:32.802 xnvme_bdev : 5.00 51085.94 199.55 0.00 0.00 1248.90 709.97 5710.99 00:19:32.802 [2024-11-20T11:33:38.564Z] =================================================================================================================== 00:19:32.802 [2024-11-20T11:33:38.564Z] Total : 51085.94 199.55 0.00 0.00 1248.90 709.97 5710.99 00:19:33.737 11:33:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:33.737 11:33:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:19:33.737 11:33:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:19:33.737 11:33:39 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:33.737 11:33:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:33.996 { 00:19:33.996 "subsystems": [ 00:19:33.996 { 00:19:33.996 "subsystem": "bdev", 00:19:33.996 "config": [ 00:19:33.996 { 00:19:33.996 "params": { 00:19:33.996 "io_mechanism": "io_uring_cmd", 00:19:33.996 "conserve_cpu": false, 00:19:33.996 "filename": "/dev/ng0n1", 00:19:33.996 "name": "xnvme_bdev" 00:19:33.996 }, 00:19:33.996 "method": "bdev_xnvme_create" 00:19:33.996 }, 00:19:33.996 { 00:19:33.996 "method": "bdev_wait_for_examine" 00:19:33.996 } 00:19:33.996 ] 00:19:33.996 } 00:19:33.996 ] 00:19:33.996 } 00:19:33.996 [2024-11-20 11:33:39.564246] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
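The randread summary just above doubles as an internal consistency check: at a fixed queue depth, Little's law says sustained IOPS is roughly queue depth divided by mean completion latency. With the reported 1248.90 usec average at depth 64:

# quick sanity check on the table above (values copied from the log)
awk 'BEGIN { printf "%.0f IOPS\n", 64 / (1248.90 / 1e6) }'   # prints 51245

which lands within about 0.3% of the measured 51085.94 IOPS; the small residual is ramp-up and drain time inside the 5-second window.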
00:19:33.996 [2024-11-20 11:33:39.564423] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73164 ] 00:19:33.996 [2024-11-20 11:33:39.751291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.255 [2024-11-20 11:33:39.899603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:34.860 Running I/O for 5 seconds... 00:19:36.735 47744.00 IOPS, 186.50 MiB/s [2024-11-20T11:33:43.432Z] 48832.00 IOPS, 190.75 MiB/s [2024-11-20T11:33:44.367Z] 48960.00 IOPS, 191.25 MiB/s [2024-11-20T11:33:45.345Z] 49168.00 IOPS, 192.06 MiB/s [2024-11-20T11:33:45.345Z] 49395.20 IOPS, 192.95 MiB/s 00:19:39.583 Latency(us) 00:19:39.583 [2024-11-20T11:33:45.345Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:39.583 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:19:39.583 xnvme_bdev : 5.01 49357.69 192.80 0.00 0.00 1292.22 823.10 6241.52 00:19:39.583 [2024-11-20T11:33:45.345Z] =================================================================================================================== 00:19:39.583 [2024-11-20T11:33:45.345Z] Total : 49357.69 192.80 0.00 0.00 1292.22 823.10 6241.52 00:19:40.982 11:33:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:40.982 11:33:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:19:40.982 11:33:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:19:40.982 11:33:46 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:40.982 11:33:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:40.982 { 00:19:40.982 "subsystems": [ 00:19:40.982 { 00:19:40.982 "subsystem": "bdev", 00:19:40.982 "config": [ 00:19:40.982 { 00:19:40.982 "params": { 00:19:40.982 "io_mechanism": "io_uring_cmd", 00:19:40.982 "conserve_cpu": false, 00:19:40.982 "filename": "/dev/ng0n1", 00:19:40.982 "name": "xnvme_bdev" 00:19:40.982 }, 00:19:40.982 "method": "bdev_xnvme_create" 00:19:40.982 }, 00:19:40.982 { 00:19:40.982 "method": "bdev_wait_for_examine" 00:19:40.982 } 00:19:40.982 ] 00:19:40.982 } 00:19:40.982 ] 00:19:40.982 } 00:19:40.982 [2024-11-20 11:33:46.613057] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:19:40.982 [2024-11-20 11:33:46.613235] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73246 ] 00:19:41.241 [2024-11-20 11:33:46.807595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.241 [2024-11-20 11:33:46.934552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.808 Running I/O for 5 seconds... 
00:19:43.678 78848.00 IOPS, 308.00 MiB/s [2024-11-20T11:33:50.375Z] 81536.00 IOPS, 318.50 MiB/s [2024-11-20T11:33:51.753Z] 77461.33 IOPS, 302.58 MiB/s [2024-11-20T11:33:52.689Z] 77824.00 IOPS, 304.00 MiB/s [2024-11-20T11:33:52.689Z] 78067.20 IOPS, 304.95 MiB/s 00:19:46.927 Latency(us) 00:19:46.927 [2024-11-20T11:33:52.689Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:46.928 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:19:46.928 xnvme_bdev : 5.00 78037.27 304.83 0.00 0.00 816.51 434.96 3323.61 00:19:46.928 [2024-11-20T11:33:52.690Z] =================================================================================================================== 00:19:46.928 [2024-11-20T11:33:52.690Z] Total : 78037.27 304.83 0.00 0.00 816.51 434.96 3323.61 00:19:48.306 11:33:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:48.306 11:33:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:19:48.306 11:33:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:19:48.306 11:33:53 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:48.306 11:33:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:48.306 { 00:19:48.306 "subsystems": [ 00:19:48.306 { 00:19:48.306 "subsystem": "bdev", 00:19:48.306 "config": [ 00:19:48.306 { 00:19:48.306 "params": { 00:19:48.306 "io_mechanism": "io_uring_cmd", 00:19:48.306 "conserve_cpu": false, 00:19:48.306 "filename": "/dev/ng0n1", 00:19:48.306 "name": "xnvme_bdev" 00:19:48.306 }, 00:19:48.306 "method": "bdev_xnvme_create" 00:19:48.306 }, 00:19:48.306 { 00:19:48.306 "method": "bdev_wait_for_examine" 00:19:48.306 } 00:19:48.306 ] 00:19:48.306 } 00:19:48.306 ] 00:19:48.306 } 00:19:48.306 [2024-11-20 11:33:53.795659] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:19:48.306 [2024-11-20 11:33:53.795832] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73330 ] 00:19:48.306 [2024-11-20 11:33:53.991726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.565 [2024-11-20 11:33:54.115012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:48.823 Running I/O for 5 seconds... 
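The unmap pass above completes at 78037.27 IOPS with an 816.51 usec average, roughly 1.5x the 4 KiB read/write rates earlier in this group, which is plausible since an NVMe deallocate carries no 4 KiB data payload. The same Little's-law check holds here too:

awk 'BEGIN { printf "%.0f IOPS\n", 64 / (816.51 / 1e6) }'   # prints 78382, measured 78037.27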
00:19:50.735 42383.00 IOPS, 165.56 MiB/s [2024-11-20T11:33:57.870Z] 41849.50 IOPS, 163.47 MiB/s [2024-11-20T11:33:58.805Z] 42247.00 IOPS, 165.03 MiB/s [2024-11-20T11:33:59.744Z] 42578.00 IOPS, 166.32 MiB/s [2024-11-20T11:33:59.744Z] 42765.20 IOPS, 167.05 MiB/s 00:19:53.982 Latency(us) 00:19:53.982 [2024-11-20T11:33:59.744Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:53.982 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:19:53.982 xnvme_bdev : 5.00 42737.87 166.94 0.00 0.00 1493.11 93.14 28835.84 00:19:53.982 [2024-11-20T11:33:59.744Z] =================================================================================================================== 00:19:53.982 [2024-11-20T11:33:59.744Z] Total : 42737.87 166.94 0.00 0.00 1493.11 93.14 28835.84 00:19:55.362 ************************************ 00:19:55.362 END TEST xnvme_bdevperf 00:19:55.362 ************************************ 00:19:55.362 00:19:55.362 real 0m28.572s 00:19:55.362 user 0m15.453s 00:19:55.362 sys 0m12.732s 00:19:55.362 11:34:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:55.362 11:34:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:55.362 11:34:00 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:19:55.362 11:34:00 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:55.362 11:34:00 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:55.362 11:34:00 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:55.362 ************************************ 00:19:55.362 START TEST xnvme_fio_plugin 00:19:55.362 ************************************ 00:19:55.362 11:34:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:19:55.362 11:34:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:19:55.362 11:34:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:19:55.362 11:34:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:55.362 11:34:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:55.362 11:34:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:55.362 11:34:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:55.362 11:34:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:55.362 11:34:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:55.362 11:34:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:55.362 11:34:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:55.362 11:34:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:55.362 11:34:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 
-- # gen_conf 00:19:55.362 11:34:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:55.362 11:34:00 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:55.362 11:34:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:55.362 11:34:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:55.362 11:34:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:55.362 11:34:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:55.362 11:34:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:55.362 11:34:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:55.362 11:34:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:55.362 11:34:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:55.362 11:34:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:55.362 { 00:19:55.362 "subsystems": [ 00:19:55.362 { 00:19:55.362 "subsystem": "bdev", 00:19:55.362 "config": [ 00:19:55.362 { 00:19:55.362 "params": { 00:19:55.362 "io_mechanism": "io_uring_cmd", 00:19:55.362 "conserve_cpu": false, 00:19:55.362 "filename": "/dev/ng0n1", 00:19:55.362 "name": "xnvme_bdev" 00:19:55.362 }, 00:19:55.362 "method": "bdev_xnvme_create" 00:19:55.362 }, 00:19:55.362 { 00:19:55.362 "method": "bdev_wait_for_examine" 00:19:55.362 } 00:19:55.362 ] 00:19:55.362 } 00:19:55.362 ] 00:19:55.362 } 00:19:55.627 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:55.627 fio-3.35 00:19:55.627 Starting 1 thread 00:20:02.306 00:20:02.306 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73457: Wed Nov 20 11:34:07 2024 00:20:02.306 read: IOPS=46.5k, BW=182MiB/s (191MB/s)(909MiB/5001msec) 00:20:02.306 slat (nsec): min=2717, max=88075, avg=4018.83, stdev=1701.95 00:20:02.306 clat (usec): min=438, max=5644, avg=1212.17, stdev=235.75 00:20:02.306 lat (usec): min=442, max=5652, avg=1216.19, stdev=236.24 00:20:02.306 clat percentiles (usec): 00:20:02.306 | 1.00th=[ 865], 5.00th=[ 938], 10.00th=[ 979], 20.00th=[ 1029], 00:20:02.306 | 30.00th=[ 1074], 40.00th=[ 1123], 50.00th=[ 1172], 60.00th=[ 1221], 00:20:02.306 | 70.00th=[ 1287], 80.00th=[ 1385], 90.00th=[ 1500], 95.00th=[ 1614], 00:20:02.306 | 99.00th=[ 1958], 99.50th=[ 2114], 99.90th=[ 2606], 99.95th=[ 2835], 00:20:02.306 | 99.99th=[ 5538] 00:20:02.306 bw ( KiB/s): min=161280, max=204288, per=99.62%, avg=185400.89, stdev=15541.21, samples=9 00:20:02.306 iops : min=40320, max=51072, avg=46350.22, stdev=3885.30, samples=9 00:20:02.306 lat (usec) : 500=0.01%, 750=0.02%, 1000=14.26% 00:20:02.306 lat (msec) : 2=84.88%, 4=0.81%, 10=0.03% 00:20:02.306 cpu : usr=35.32%, sys=63.68%, ctx=89, majf=0, minf=762 00:20:02.306 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:20:02.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.306 complete : 0=0.0%, 4=98.5%, 8=0.0%, 
16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:20:02.306 issued rwts: total=232669,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:02.306 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:02.306 00:20:02.306 Run status group 0 (all jobs): 00:20:02.306 READ: bw=182MiB/s (191MB/s), 182MiB/s-182MiB/s (191MB/s-191MB/s), io=909MiB (953MB), run=5001-5001msec 00:20:03.243 ----------------------------------------------------- 00:20:03.243 Suppressions used: 00:20:03.243 count bytes template 00:20:03.243 1 11 /usr/src/fio/parse.c 00:20:03.243 1 8 libtcmalloc_minimal.so 00:20:03.243 1 904 libcrypto.so 00:20:03.243 ----------------------------------------------------- 00:20:03.243 00:20:03.243 11:34:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:03.243 11:34:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:03.243 11:34:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:20:03.243 11:34:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:03.243 11:34:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:03.243 11:34:08 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:20:03.243 11:34:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:03.243 11:34:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:03.243 11:34:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:03.243 11:34:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:03.243 11:34:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:20:03.243 11:34:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:03.243 11:34:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:03.243 11:34:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:20:03.243 11:34:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:03.243 11:34:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:03.243 11:34:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:03.243 11:34:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:03.243 11:34:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:20:03.243 11:34:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:03.243 11:34:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 
--bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:03.243 { 00:20:03.243 "subsystems": [ 00:20:03.243 { 00:20:03.243 "subsystem": "bdev", 00:20:03.243 "config": [ 00:20:03.243 { 00:20:03.243 "params": { 00:20:03.243 "io_mechanism": "io_uring_cmd", 00:20:03.243 "conserve_cpu": false, 00:20:03.243 "filename": "/dev/ng0n1", 00:20:03.243 "name": "xnvme_bdev" 00:20:03.243 }, 00:20:03.243 "method": "bdev_xnvme_create" 00:20:03.243 }, 00:20:03.243 { 00:20:03.243 "method": "bdev_wait_for_examine" 00:20:03.243 } 00:20:03.243 ] 00:20:03.243 } 00:20:03.243 ] 00:20:03.243 } 00:20:03.502 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:20:03.502 fio-3.35 00:20:03.502 Starting 1 thread 00:20:10.076 00:20:10.076 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73555: Wed Nov 20 11:34:14 2024 00:20:10.076 write: IOPS=42.0k, BW=164MiB/s (172MB/s)(820MiB/5001msec); 0 zone resets 00:20:10.076 slat (nsec): min=2751, max=74685, avg=4756.30, stdev=2107.69 00:20:10.076 clat (usec): min=82, max=11122, avg=1344.84, stdev=423.08 00:20:10.076 lat (usec): min=86, max=11130, avg=1349.60, stdev=423.49 00:20:10.076 clat percentiles (usec): 00:20:10.076 | 1.00th=[ 545], 5.00th=[ 922], 10.00th=[ 1012], 20.00th=[ 1090], 00:20:10.076 | 30.00th=[ 1156], 40.00th=[ 1205], 50.00th=[ 1270], 60.00th=[ 1336], 00:20:10.076 | 70.00th=[ 1418], 80.00th=[ 1532], 90.00th=[ 1778], 95.00th=[ 2057], 00:20:10.076 | 99.00th=[ 2737], 99.50th=[ 3097], 99.90th=[ 4686], 99.95th=[ 5014], 00:20:10.076 | 99.99th=[10552] 00:20:10.076 bw ( KiB/s): min=151392, max=176128, per=100.00%, avg=168024.00, stdev=7452.94, samples=9 00:20:10.076 iops : min=37848, max=44032, avg=42006.00, stdev=1863.24, samples=9 00:20:10.076 lat (usec) : 100=0.01%, 250=0.07%, 500=0.72%, 750=1.85%, 1000=6.49% 00:20:10.076 lat (msec) : 2=85.13%, 4=5.52%, 10=0.20%, 20=0.02% 00:20:10.076 cpu : usr=36.40%, sys=62.44%, ctx=10, majf=0, minf=762 00:20:10.076 IO depths : 1=1.3%, 2=2.6%, 4=5.3%, 8=10.8%, 16=22.5%, 32=55.4%, >=64=2.0% 00:20:10.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:10.076 complete : 0=0.0%, 4=98.1%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.4%, >=64=0.0% 00:20:10.076 issued rwts: total=0,210021,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:10.076 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:10.076 00:20:10.076 Run status group 0 (all jobs): 00:20:10.076 WRITE: bw=164MiB/s (172MB/s), 164MiB/s-164MiB/s (172MB/s-172MB/s), io=820MiB (860MB), run=5001-5001msec 00:20:11.018 ----------------------------------------------------- 00:20:11.018 Suppressions used: 00:20:11.018 count bytes template 00:20:11.018 1 11 /usr/src/fio/parse.c 00:20:11.018 1 8 libtcmalloc_minimal.so 00:20:11.018 1 904 libcrypto.so 00:20:11.018 ----------------------------------------------------- 00:20:11.018 00:20:11.018 00:20:11.018 real 0m15.519s 00:20:11.018 user 0m8.054s 00:20:11.018 sys 0m7.109s 00:20:11.018 11:34:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:11.018 11:34:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:11.018 ************************************ 00:20:11.018 END TEST xnvme_fio_plugin 00:20:11.018 ************************************ 00:20:11.018 11:34:16 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:20:11.018 11:34:16 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 
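For orientation, the conserve-cpu sweep traced around this point (xnvme.sh@82-84 above, @86-88 below) has, in rough outline, the following shape. This is a sketch reconstructed from the traced line numbers; the array contents false/true are inferred from the two passes shown in this log:

    for cc in "${xnvme_conserve_cpu[@]}"; do              # false on the first pass, true here
        method_bdev_xnvme_create_0["conserve_cpu"]=$cc    # flips the param gen_conf emits in the JSON
        conserve_cpu=$cc
        run_test xnvme_rpc xnvme_rpc                      # RPC create/inspect/delete pass
        run_test xnvme_bdevperf xnvme_bdevperf            # bdevperf io_pattern sweep
        run_test xnvme_fio_plugin xnvme_fio_plugin        # fio plugin pass
    done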
00:20:11.018 11:34:16 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:20:11.018 11:34:16 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:20:11.018 11:34:16 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:11.018 11:34:16 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:11.018 11:34:16 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:11.018 ************************************ 00:20:11.018 START TEST xnvme_rpc 00:20:11.018 ************************************ 00:20:11.018 11:34:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:20:11.018 11:34:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:20:11.018 11:34:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:20:11.018 11:34:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:20:11.019 11:34:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:20:11.019 11:34:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=73646 00:20:11.019 11:34:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:11.019 11:34:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 73646 00:20:11.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:11.019 11:34:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 73646 ']' 00:20:11.019 11:34:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:11.019 11:34:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:11.019 11:34:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:11.019 11:34:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:11.019 11:34:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:11.019 [2024-11-20 11:34:16.689972] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
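The rpc_cmd calls traced below go over the target's JSON-RPC socket; rpc_cmd is autotest's wrapper around scripts/rpc.py. Driven by hand, the same sequence looks roughly like the sketch below (assuming the default /var/tmp/spdk.sock socket; the -c flag is the conserve_cpu toggle, per cc["true"]=-c above):

    build/bin/spdk_tgt &                                            # target, as launched above
    scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c
    scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # expect: true
    scripts/rpc.py bdev_xnvme_delete xnvme_bdev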
00:20:11.019 [2024-11-20 11:34:16.690154] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73646 ] 00:20:11.285 [2024-11-20 11:34:16.876721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.542 [2024-11-20 11:34:17.050808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:12.478 11:34:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:12.478 11:34:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:20:12.478 11:34:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:20:12.478 11:34:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.478 11:34:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:12.478 xnvme_bdev 00:20:12.478 11:34:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.478 11:34:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:20:12.478 11:34:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:12.478 11:34:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.478 11:34:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:20:12.478 11:34:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:12.478 11:34:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.478 11:34:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:20:12.478 11:34:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:20:12.478 11:34:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:20:12.478 11:34:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:12.478 11:34:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.478 11:34:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:12.478 11:34:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.478 11:34:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:20:12.478 11:34:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:20:12.478 11:34:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:12.478 11:34:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.478 11:34:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:12.478 11:34:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:20:12.478 11:34:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.478 11:34:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:20:12.478 11:34:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:20:12.478 11:34:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:12.478 11:34:18 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:20:12.478 11:34:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.478 11:34:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:12.737 11:34:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.737 11:34:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:20:12.737 11:34:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:20:12.737 11:34:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.737 11:34:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:12.737 11:34:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.737 11:34:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 73646 00:20:12.737 11:34:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 73646 ']' 00:20:12.737 11:34:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 73646 00:20:12.737 11:34:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:20:12.737 11:34:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:12.737 11:34:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73646 00:20:12.737 killing process with pid 73646 00:20:12.737 11:34:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:12.737 11:34:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:12.737 11:34:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73646' 00:20:12.737 11:34:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 73646 00:20:12.737 11:34:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 73646 00:20:15.281 00:20:15.281 real 0m4.437s 00:20:15.281 user 0m4.700s 00:20:15.281 sys 0m0.601s 00:20:15.281 11:34:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:15.281 ************************************ 00:20:15.281 END TEST xnvme_rpc 00:20:15.281 ************************************ 00:20:15.281 11:34:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:15.281 11:34:21 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:20:15.281 11:34:21 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:15.281 11:34:21 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:15.281 11:34:21 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:15.540 ************************************ 00:20:15.541 START TEST xnvme_bdevperf 00:20:15.541 ************************************ 00:20:15.541 11:34:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:20:15.541 11:34:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:20:15.541 11:34:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:20:15.541 11:34:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:15.541 11:34:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:20:15.541 11:34:21 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:20:15.541 11:34:21 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:15.541 11:34:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:15.541 { 00:20:15.541 "subsystems": [ 00:20:15.541 { 00:20:15.541 "subsystem": "bdev", 00:20:15.541 "config": [ 00:20:15.541 { 00:20:15.541 "params": { 00:20:15.541 "io_mechanism": "io_uring_cmd", 00:20:15.541 "conserve_cpu": true, 00:20:15.541 "filename": "/dev/ng0n1", 00:20:15.541 "name": "xnvme_bdev" 00:20:15.541 }, 00:20:15.541 "method": "bdev_xnvme_create" 00:20:15.541 }, 00:20:15.541 { 00:20:15.541 "method": "bdev_wait_for_examine" 00:20:15.541 } 00:20:15.541 ] 00:20:15.541 } 00:20:15.541 ] 00:20:15.541 } 00:20:15.541 [2024-11-20 11:34:21.166175] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:20:15.541 [2024-11-20 11:34:21.166590] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73730 ] 00:20:15.799 [2024-11-20 11:34:21.372781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.799 [2024-11-20 11:34:21.549272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.366 Running I/O for 5 seconds... 00:20:18.269 45504.00 IOPS, 177.75 MiB/s [2024-11-20T11:34:25.406Z] 46112.00 IOPS, 180.12 MiB/s [2024-11-20T11:34:26.343Z] 47742.00 IOPS, 186.49 MiB/s [2024-11-20T11:34:27.279Z] 48542.25 IOPS, 189.62 MiB/s 00:20:21.517 Latency(us) 00:20:21.517 [2024-11-20T11:34:27.279Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:21.517 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:20:21.517 xnvme_bdev : 5.00 48952.26 191.22 0.00 0.00 1303.40 329.63 3947.76 00:20:21.517 [2024-11-20T11:34:27.279Z] =================================================================================================================== 00:20:21.517 [2024-11-20T11:34:27.279Z] Total : 48952.26 191.22 0.00 0.00 1303.40 329.63 3947.76 00:20:22.892 11:34:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:22.892 11:34:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:20:22.892 11:34:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:20:22.892 11:34:28 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:22.892 11:34:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:22.892 { 00:20:22.892 "subsystems": [ 00:20:22.892 { 00:20:22.892 "subsystem": "bdev", 00:20:22.892 "config": [ 00:20:22.892 { 00:20:22.892 "params": { 00:20:22.892 "io_mechanism": "io_uring_cmd", 00:20:22.892 "conserve_cpu": true, 00:20:22.892 "filename": "/dev/ng0n1", 00:20:22.892 "name": "xnvme_bdev" 00:20:22.892 }, 00:20:22.892 "method": "bdev_xnvme_create" 00:20:22.892 }, 00:20:22.892 { 00:20:22.892 "method": "bdev_wait_for_examine" 00:20:22.892 } 00:20:22.892 ] 00:20:22.892 } 00:20:22.892 ] 00:20:22.892 } 00:20:22.892 [2024-11-20 11:34:28.405030] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
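Each bdevperf pass below pairs the generated JSON (fed over /dev/fd/62) with the same knobs. Reading the invocation above against its result line ("Job: xnvme_bdev ... workload: randread, depth: 64, IO size: 4096"): -q sets the queue depth, -w the I/O pattern, -t the run time in seconds, -o the I/O size in bytes, and -T names the bdev under test. A standalone rerun would look like the sketch below, with xnvme.json standing in for the fd-passed config:

    build/examples/bdevperf --json xnvme.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096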
00:20:22.892 [2024-11-20 11:34:28.405211] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73811 ] 00:20:22.892 [2024-11-20 11:34:28.600438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.150 [2024-11-20 11:34:28.731296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:23.410 Running I/O for 5 seconds... 00:20:25.716 43392.00 IOPS, 169.50 MiB/s [2024-11-20T11:34:32.416Z] 39680.00 IOPS, 155.00 MiB/s [2024-11-20T11:34:33.354Z] 40064.00 IOPS, 156.50 MiB/s [2024-11-20T11:34:34.289Z] 41680.00 IOPS, 162.81 MiB/s [2024-11-20T11:34:34.289Z] 41566.40 IOPS, 162.37 MiB/s 00:20:28.527 Latency(us) 00:20:28.527 [2024-11-20T11:34:34.289Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:28.527 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:20:28.527 xnvme_bdev : 5.01 41512.01 162.16 0.00 0.00 1537.45 565.64 103858.96 00:20:28.527 [2024-11-20T11:34:34.289Z] =================================================================================================================== 00:20:28.527 [2024-11-20T11:34:34.289Z] Total : 41512.01 162.16 0.00 0.00 1537.45 565.64 103858.96 00:20:29.904 11:34:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:29.904 11:34:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:20:29.904 11:34:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:20:29.904 11:34:35 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:29.904 11:34:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:29.904 { 00:20:29.904 "subsystems": [ 00:20:29.904 { 00:20:29.904 "subsystem": "bdev", 00:20:29.904 "config": [ 00:20:29.904 { 00:20:29.904 "params": { 00:20:29.904 "io_mechanism": "io_uring_cmd", 00:20:29.904 "conserve_cpu": true, 00:20:29.904 "filename": "/dev/ng0n1", 00:20:29.904 "name": "xnvme_bdev" 00:20:29.904 }, 00:20:29.904 "method": "bdev_xnvme_create" 00:20:29.904 }, 00:20:29.904 { 00:20:29.904 "method": "bdev_wait_for_examine" 00:20:29.904 } 00:20:29.904 ] 00:20:29.904 } 00:20:29.904 ] 00:20:29.904 } 00:20:29.904 [2024-11-20 11:34:35.513081] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:20:29.904 [2024-11-20 11:34:35.513281] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73891 ] 00:20:30.162 [2024-11-20 11:34:35.712522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.162 [2024-11-20 11:34:35.888374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:30.730 Running I/O for 5 seconds... 
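As a quick consistency check on the randwrite totals above: 41512.01 IOPS x 4096 B = 170,033,193 B/s, i.e. about 170.0 MB/s decimal, which is 170,033,193 / 1,048,576 = 162.2 MiB/s binary — matching the reported 162.16 MiB/s.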
00:20:32.619 74176.00 IOPS, 289.75 MiB/s [2024-11-20T11:34:39.757Z] 77952.00 IOPS, 304.50 MiB/s [2024-11-20T11:34:40.692Z] 80533.33 IOPS, 314.58 MiB/s [2024-11-20T11:34:41.627Z] 80176.00 IOPS, 313.19 MiB/s 00:20:35.865 Latency(us) 00:20:35.865 [2024-11-20T11:34:41.627Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.865 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:20:35.865 xnvme_bdev : 5.00 77407.57 302.37 0.00 0.00 823.11 411.55 4962.01 00:20:35.865 [2024-11-20T11:34:41.627Z] =================================================================================================================== 00:20:35.865 [2024-11-20T11:34:41.627Z] Total : 77407.57 302.37 0.00 0.00 823.11 411.55 4962.01 00:20:37.240 11:34:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:37.240 11:34:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:20:37.240 11:34:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:20:37.240 11:34:42 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:37.240 11:34:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:37.240 { 00:20:37.240 "subsystems": [ 00:20:37.240 { 00:20:37.240 "subsystem": "bdev", 00:20:37.240 "config": [ 00:20:37.240 { 00:20:37.240 "params": { 00:20:37.240 "io_mechanism": "io_uring_cmd", 00:20:37.240 "conserve_cpu": true, 00:20:37.240 "filename": "/dev/ng0n1", 00:20:37.240 "name": "xnvme_bdev" 00:20:37.240 }, 00:20:37.240 "method": "bdev_xnvme_create" 00:20:37.240 }, 00:20:37.240 { 00:20:37.240 "method": "bdev_wait_for_examine" 00:20:37.240 } 00:20:37.240 ] 00:20:37.240 } 00:20:37.240 ] 00:20:37.240 } 00:20:37.240 [2024-11-20 11:34:42.793812] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:20:37.240 [2024-11-20 11:34:42.794239] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73971 ] 00:20:37.240 [2024-11-20 11:34:42.984206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.497 [2024-11-20 11:34:43.130611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.062 Running I/O for 5 seconds... 
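The four passes in this suite (randread, randwrite, unmap, and the write_zeroes run starting here) come from the io_pattern loop traced at xnvme.sh@15-17. In outline, inside xnvme_bdevperf() — a sketch of the traced lines, with io_uring_cmd's pattern list taken from the runs actually shown:

    local -n io_pattern_ref=io_uring_cmd              # per-mechanism pattern list (nameref)
    for io_pattern in "${io_pattern_ref[@]}"; do      # randread randwrite unmap write_zeroes
        build/examples/bdevperf --json /dev/fd/62 \
            -q 64 -w "$io_pattern" -t 5 -T xnvme_bdev -o 4096
    done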
00:20:39.961 39604.00 IOPS, 154.70 MiB/s [2024-11-20T11:34:46.652Z] 32411.50 IOPS, 126.61 MiB/s [2024-11-20T11:34:47.585Z] 31533.67 IOPS, 123.18 MiB/s [2024-11-20T11:34:48.960Z] 34840.75 IOPS, 136.10 MiB/s 00:20:43.198 Latency(us) 00:20:43.198 [2024-11-20T11:34:48.960Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:43.198 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:20:43.198 xnvme_bdev : 5.00 36296.43 141.78 0.00 0.00 1756.80 69.73 19348.72 00:20:43.198 [2024-11-20T11:34:48.960Z] =================================================================================================================== 00:20:43.198 [2024-11-20T11:34:48.960Z] Total : 36296.43 141.78 0.00 0.00 1756.80 69.73 19348.72 00:20:44.132 00:20:44.132 real 0m28.774s 00:20:44.132 user 0m17.824s 00:20:44.132 sys 0m9.139s 00:20:44.132 11:34:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:44.132 11:34:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:44.132 ************************************ 00:20:44.132 END TEST xnvme_bdevperf 00:20:44.132 ************************************ 00:20:44.132 11:34:49 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:20:44.132 11:34:49 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:44.132 11:34:49 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:44.132 11:34:49 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:44.132 ************************************ 00:20:44.132 START TEST xnvme_fio_plugin 00:20:44.132 ************************************ 00:20:44.132 11:34:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:20:44.132 11:34:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:20:44.133 11:34:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:20:44.133 11:34:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:44.133 11:34:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:44.133 11:34:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:20:44.133 11:34:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:44.133 11:34:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:44.133 11:34:49 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:20:44.133 11:34:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:44.133 11:34:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:44.133 11:34:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:44.133 11:34:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:44.133 11:34:49 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1345 -- # shift 00:20:44.133 11:34:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:44.133 11:34:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:44.133 11:34:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:20:44.133 11:34:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:44.133 11:34:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:44.391 11:34:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:44.391 11:34:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:44.391 11:34:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:20:44.391 11:34:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:44.391 11:34:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:44.391 { 00:20:44.391 "subsystems": [ 00:20:44.391 { 00:20:44.391 "subsystem": "bdev", 00:20:44.391 "config": [ 00:20:44.391 { 00:20:44.391 "params": { 00:20:44.391 "io_mechanism": "io_uring_cmd", 00:20:44.391 "conserve_cpu": true, 00:20:44.391 "filename": "/dev/ng0n1", 00:20:44.391 "name": "xnvme_bdev" 00:20:44.391 }, 00:20:44.391 "method": "bdev_xnvme_create" 00:20:44.391 }, 00:20:44.391 { 00:20:44.391 "method": "bdev_wait_for_examine" 00:20:44.391 } 00:20:44.391 ] 00:20:44.391 } 00:20:44.391 ] 00:20:44.391 } 00:20:44.649 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:20:44.649 fio-3.35 00:20:44.649 Starting 1 thread 00:20:51.211 00:20:51.211 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=74095: Wed Nov 20 11:34:56 2024 00:20:51.211 read: IOPS=46.2k, BW=180MiB/s (189MB/s)(902MiB/5001msec) 00:20:51.211 slat (nsec): min=2683, max=62306, avg=4164.31, stdev=1883.12 00:20:51.211 clat (usec): min=571, max=7032, avg=1213.91, stdev=256.75 00:20:51.211 lat (usec): min=576, max=7038, avg=1218.07, stdev=257.41 00:20:51.211 clat percentiles (usec): 00:20:51.211 | 1.00th=[ 857], 5.00th=[ 914], 10.00th=[ 963], 20.00th=[ 1020], 00:20:51.211 | 30.00th=[ 1074], 40.00th=[ 1123], 50.00th=[ 1172], 60.00th=[ 1221], 00:20:51.211 | 70.00th=[ 1287], 80.00th=[ 1385], 90.00th=[ 1516], 95.00th=[ 1647], 00:20:51.211 | 99.00th=[ 1958], 99.50th=[ 2180], 99.90th=[ 2835], 99.95th=[ 3195], 00:20:51.211 | 99.99th=[ 6915] 00:20:51.211 bw ( KiB/s): min=160256, max=198144, per=98.06%, avg=181134.22, stdev=10848.41, samples=9 00:20:51.211 iops : min=40064, max=49536, avg=45283.56, stdev=2712.10, samples=9 00:20:51.211 lat (usec) : 750=0.02%, 1000=16.60% 00:20:51.211 lat (msec) : 2=82.51%, 4=0.84%, 10=0.03% 00:20:51.211 cpu : usr=51.66%, sys=45.46%, ctx=47, majf=0, minf=762 00:20:51.211 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:20:51.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:51.211 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:20:51.211 issued rwts: 
total=230946,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:51.211 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:51.211 00:20:51.211 Run status group 0 (all jobs): 00:20:51.211 READ: bw=180MiB/s (189MB/s), 180MiB/s-180MiB/s (189MB/s-189MB/s), io=902MiB (946MB), run=5001-5001msec 00:20:52.147 ----------------------------------------------------- 00:20:52.147 Suppressions used: 00:20:52.147 count bytes template 00:20:52.147 1 11 /usr/src/fio/parse.c 00:20:52.147 1 8 libtcmalloc_minimal.so 00:20:52.147 1 904 libcrypto.so 00:20:52.147 ----------------------------------------------------- 00:20:52.147 00:20:52.147 11:34:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:52.147 11:34:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:20:52.147 11:34:57 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:20:52.147 11:34:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:52.147 11:34:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:52.147 11:34:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:52.147 11:34:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:52.147 11:34:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:52.147 11:34:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:52.147 11:34:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:52.147 11:34:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:20:52.147 11:34:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:52.147 11:34:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:52.147 11:34:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:52.147 11:34:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:20:52.147 11:34:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:52.147 11:34:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:52.147 11:34:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:52.147 11:34:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:20:52.147 11:34:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:52.147 11:34:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based 
--runtime=5 --thread=1 --name xnvme_bdev 00:20:52.147 { 00:20:52.147 "subsystems": [ 00:20:52.147 { 00:20:52.147 "subsystem": "bdev", 00:20:52.147 "config": [ 00:20:52.147 { 00:20:52.147 "params": { 00:20:52.147 "io_mechanism": "io_uring_cmd", 00:20:52.147 "conserve_cpu": true, 00:20:52.147 "filename": "/dev/ng0n1", 00:20:52.147 "name": "xnvme_bdev" 00:20:52.147 }, 00:20:52.147 "method": "bdev_xnvme_create" 00:20:52.147 }, 00:20:52.147 { 00:20:52.147 "method": "bdev_wait_for_examine" 00:20:52.147 } 00:20:52.147 ] 00:20:52.147 } 00:20:52.147 ] 00:20:52.147 } 00:20:52.147 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:20:52.147 fio-3.35 00:20:52.147 Starting 1 thread 00:20:58.708 00:20:58.708 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=74192: Wed Nov 20 11:35:03 2024 00:20:58.708 write: IOPS=42.5k, BW=166MiB/s (174MB/s)(829MiB/5001msec); 0 zone resets 00:20:58.708 slat (usec): min=2, max=469, avg= 5.05, stdev= 3.15 00:20:58.708 clat (usec): min=628, max=3957, avg=1303.49, stdev=260.57 00:20:58.708 lat (usec): min=633, max=3966, avg=1308.55, stdev=261.61 00:20:58.708 clat percentiles (usec): 00:20:58.708 | 1.00th=[ 906], 5.00th=[ 988], 10.00th=[ 1037], 20.00th=[ 1106], 00:20:58.708 | 30.00th=[ 1156], 40.00th=[ 1205], 50.00th=[ 1254], 60.00th=[ 1303], 00:20:58.708 | 70.00th=[ 1385], 80.00th=[ 1483], 90.00th=[ 1647], 95.00th=[ 1778], 00:20:58.708 | 99.00th=[ 2114], 99.50th=[ 2343], 99.90th=[ 2868], 99.95th=[ 2999], 00:20:58.708 | 99.99th=[ 3851] 00:20:58.708 bw ( KiB/s): min=162304, max=181760, per=100.00%, avg=170531.22, stdev=6565.77, samples=9 00:20:58.708 iops : min=40576, max=45440, avg=42632.78, stdev=1641.45, samples=9 00:20:58.708 lat (usec) : 750=0.03%, 1000=6.07% 00:20:58.708 lat (msec) : 2=92.31%, 4=1.58% 00:20:58.708 cpu : usr=45.42%, sys=50.96%, ctx=25, majf=0, minf=762 00:20:58.708 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.4%, 16=24.9%, 32=50.2%, >=64=1.6% 00:20:58.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.708 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:20:58.708 issued rwts: total=0,212338,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:58.708 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:58.708 00:20:58.708 Run status group 0 (all jobs): 00:20:58.708 WRITE: bw=166MiB/s (174MB/s), 166MiB/s-166MiB/s (174MB/s-174MB/s), io=829MiB (870MB), run=5001-5001msec 00:21:00.085 ----------------------------------------------------- 00:21:00.085 Suppressions used: 00:21:00.085 count bytes template 00:21:00.085 1 11 /usr/src/fio/parse.c 00:21:00.085 1 8 libtcmalloc_minimal.so 00:21:00.085 1 904 libcrypto.so 00:21:00.085 ----------------------------------------------------- 00:21:00.085 00:21:00.085 00:21:00.085 real 0m15.650s 00:21:00.085 user 0m9.401s 00:21:00.085 sys 0m5.647s 00:21:00.085 11:35:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:00.085 11:35:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:21:00.085 ************************************ 00:21:00.085 END TEST xnvme_fio_plugin 00:21:00.085 ************************************ 00:21:00.085 11:35:05 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 73646 00:21:00.085 11:35:05 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73646 ']' 00:21:00.085 11:35:05 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 73646 00:21:00.085 Process with pid 73646 is not found 00:21:00.085 
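The "not found" message here and the kill error just below are killprocess being deliberately tolerant: the EXIT trap re-kills spdk_tgt 73646, which the rpc pass already shut down. In rough shape, reconstructed from the autotest_common.sh line numbers in the trace (@954/@958/@972/@981; the real helper also does a uname/ps sanity check):

    killprocess() {
        [[ -n $1 ]] || return 1                      # @954: require a pid
        if ! kill -0 "$1" 2>/dev/null; then          # @958: probe whether it still runs
            echo "Process with pid $1 is not found"  # @981
            return 0
        fi
        echo "killing process with pid $1"           # @972
        kill "$1" && wait "$1"                       # @973/@978 (simplified)
    }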
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (73646) - No such process 00:21:00.085 11:35:05 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 73646 is not found' 00:21:00.085 00:21:00.085 real 4m5.103s 00:21:00.085 user 2m18.619s 00:21:00.085 sys 1m30.227s 00:21:00.085 11:35:05 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:00.085 ************************************ 00:21:00.085 11:35:05 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:00.085 END TEST nvme_xnvme 00:21:00.085 ************************************ 00:21:00.085 11:35:05 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:21:00.085 11:35:05 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:00.085 11:35:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:00.085 11:35:05 -- common/autotest_common.sh@10 -- # set +x 00:21:00.085 ************************************ 00:21:00.085 START TEST blockdev_xnvme 00:21:00.085 ************************************ 00:21:00.085 11:35:05 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:21:00.085 * Looking for test storage... 00:21:00.085 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:21:00.085 11:35:05 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:00.085 11:35:05 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:21:00.085 11:35:05 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:00.085 11:35:05 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:00.085 11:35:05 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:00.085 11:35:05 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:00.085 11:35:05 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:00.085 11:35:05 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:21:00.085 11:35:05 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:21:00.085 11:35:05 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:21:00.085 11:35:05 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:21:00.085 11:35:05 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:21:00.085 11:35:05 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:21:00.085 11:35:05 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:21:00.085 11:35:05 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:00.085 11:35:05 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:21:00.085 11:35:05 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:21:00.085 11:35:05 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:00.085 11:35:05 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:00.085 11:35:05 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:21:00.085 11:35:05 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:21:00.085 11:35:05 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:00.085 11:35:05 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:21:00.085 11:35:05 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:21:00.085 11:35:05 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:21:00.345 11:35:05 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:21:00.345 11:35:05 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:00.346 11:35:05 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:21:00.346 11:35:05 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:21:00.346 11:35:05 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:00.346 11:35:05 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:00.346 11:35:05 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:21:00.346 11:35:05 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:00.346 11:35:05 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:00.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.346 --rc genhtml_branch_coverage=1 00:21:00.346 --rc genhtml_function_coverage=1 00:21:00.346 --rc genhtml_legend=1 00:21:00.346 --rc geninfo_all_blocks=1 00:21:00.346 --rc geninfo_unexecuted_blocks=1 00:21:00.346 00:21:00.346 ' 00:21:00.346 11:35:05 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:00.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.346 --rc genhtml_branch_coverage=1 00:21:00.346 --rc genhtml_function_coverage=1 00:21:00.346 --rc genhtml_legend=1 00:21:00.346 --rc geninfo_all_blocks=1 00:21:00.346 --rc geninfo_unexecuted_blocks=1 00:21:00.346 00:21:00.346 ' 00:21:00.346 11:35:05 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:00.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.346 --rc genhtml_branch_coverage=1 00:21:00.346 --rc genhtml_function_coverage=1 00:21:00.346 --rc genhtml_legend=1 00:21:00.346 --rc geninfo_all_blocks=1 00:21:00.346 --rc geninfo_unexecuted_blocks=1 00:21:00.346 00:21:00.346 ' 00:21:00.346 11:35:05 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:00.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.346 --rc genhtml_branch_coverage=1 00:21:00.346 --rc genhtml_function_coverage=1 00:21:00.346 --rc genhtml_legend=1 00:21:00.346 --rc geninfo_all_blocks=1 00:21:00.346 --rc geninfo_unexecuted_blocks=1 00:21:00.346 00:21:00.346 ' 00:21:00.346 11:35:05 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:21:00.346 11:35:05 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:21:00.346 11:35:05 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:21:00.346 11:35:05 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:00.346 11:35:05 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:21:00.346 11:35:05 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:21:00.346 11:35:05 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:21:00.346 11:35:05 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:21:00.346 11:35:05 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:21:00.346 11:35:05 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:21:00.346 11:35:05 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:21:00.346 11:35:05 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:21:00.346 11:35:05 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:21:00.346 11:35:05 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:21:00.346 11:35:05 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:21:00.346 11:35:05 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:21:00.346 11:35:05 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:21:00.346 11:35:05 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:21:00.346 11:35:05 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:21:00.346 11:35:05 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:21:00.346 11:35:05 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:21:00.346 11:35:05 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:21:00.346 11:35:05 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:21:00.346 11:35:05 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:21:00.346 11:35:05 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74332 00:21:00.346 11:35:05 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:21:00.346 11:35:05 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:21:00.346 11:35:05 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 74332 00:21:00.346 11:35:05 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 74332 ']' 00:21:00.346 11:35:05 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:00.346 11:35:05 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:00.346 11:35:05 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:00.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:00.346 11:35:05 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:00.346 11:35:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:00.346 [2024-11-20 11:35:06.028961] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
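While the target comes up, it is worth previewing what setup_xnvme_conf (traced below) does: reset PCI bindings, skip zoned namespaces, and turn every /dev/nvme*n* block device into a bdev_xnvme_create command over io_uring with -c. In outline, inside setup_xnvme_conf() — a sketch assembled from the blockdev.sh@88-100 trace, with the zoned_devs bookkeeping simplified:

    local io_mechanism=io_uring                                         # @88
    local nvme nvmes=()                                                 # @89
    scripts/setup.sh reset                                              # @91: rebind PCI devices
    for nvme in /dev/nvme*n*; do                                        # @94
        [[ -b $nvme && -z ${zoned_devs["${nvme##*/}"]:-} ]] || continue # @95: block dev, not zoned
        nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") # @96
    done
    (( ${#nvmes[@]} > 0 )) && printf '%s\n' "${nvmes[@]}" | rpc_cmd     # @99-100: batch-create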
00:21:00.346 [2024-11-20 11:35:06.029182] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74332 ] 00:21:00.604 [2024-11-20 11:35:06.258241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.863 [2024-11-20 11:35:06.399216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.801 11:35:07 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:01.801 11:35:07 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:21:01.801 11:35:07 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:21:01.801 11:35:07 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:21:01.801 11:35:07 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:21:01.801 11:35:07 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:21:01.801 11:35:07 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:02.737 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:03.305 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:21:03.305 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:21:03.305 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:21:03.305 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:21:03.305 11:35:08 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:21:03.305 11:35:08 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:21:03.305 11:35:08 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:21:03.305 11:35:08 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:21:03.306 11:35:08 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:21:03.306 11:35:08 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:21:03.306 11:35:08 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:21:03.306 11:35:08 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:03.306 11:35:08 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:03.306 11:35:08 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:21:03.306 11:35:08 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2 00:21:03.306 11:35:08 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:21:03.306 11:35:08 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:21:03.306 11:35:08 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:03.306 11:35:08 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:21:03.306 11:35:08 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3 00:21:03.306 11:35:08 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:21:03.306 11:35:08 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:21:03.306 11:35:08 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:03.306 11:35:08 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 
00:21:03.306 11:35:08 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1c1n1 00:21:03.306 11:35:08 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:21:03.306 11:35:08 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:21:03.306 11:35:08 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:03.306 11:35:08 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:21:03.306 11:35:08 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:21:03.306 11:35:08 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:21:03.306 11:35:08 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:21:03.306 11:35:08 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:03.306 11:35:08 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:21:03.306 11:35:08 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:21:03.306 11:35:08 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:21:03.306 11:35:08 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:21:03.306 11:35:08 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:03.306 11:35:08 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:21:03.306 11:35:08 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:21:03.306 11:35:08 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:21:03.306 11:35:08 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:21:03.306 11:35:08 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:03.306 11:35:08 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:21:03.306 11:35:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:21:03.306 11:35:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:21:03.306 11:35:08 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:21:03.306 11:35:08 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:21:03.306 11:35:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:21:03.306 11:35:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:21:03.306 11:35:08 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:21:03.306 11:35:08 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:21:03.306 11:35:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:21:03.306 11:35:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:21:03.306 11:35:08 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:21:03.306 11:35:08 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:21:03.306 11:35:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:21:03.306 11:35:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:21:03.306 11:35:08 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:21:03.306 11:35:08 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme 
in /dev/nvme*n* 00:21:03.306 11:35:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:21:03.306 11:35:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:21:03.306 11:35:08 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:21:03.306 11:35:08 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:21:03.306 11:35:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:21:03.306 11:35:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:21:03.306 11:35:08 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:21:03.306 11:35:08 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:21:03.306 11:35:08 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:21:03.306 11:35:08 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.306 11:35:08 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:21:03.306 11:35:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:03.306 nvme0n1 00:21:03.306 nvme0n2 00:21:03.306 nvme0n3 00:21:03.306 nvme1n1 00:21:03.306 nvme2n1 00:21:03.306 nvme3n1 00:21:03.306 11:35:08 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.306 11:35:08 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:21:03.306 11:35:08 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.306 11:35:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:03.306 11:35:08 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.306 11:35:08 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:21:03.306 11:35:08 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:21:03.306 11:35:08 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.306 11:35:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:03.306 11:35:08 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.306 11:35:08 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:21:03.306 11:35:08 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.306 11:35:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:03.306 11:35:08 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.306 11:35:08 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:21:03.306 11:35:09 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.306 11:35:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:03.306 11:35:09 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.306 11:35:09 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:21:03.306 11:35:09 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:21:03.306 11:35:09 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:21:03.306 11:35:09 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.306 11:35:09 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:21:03.565 11:35:09 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.565 11:35:09 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:21:03.566 11:35:09 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "1e2e165e-dcc7-4c8e-adf7-ced22f76a56e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1e2e165e-dcc7-4c8e-adf7-ced22f76a56e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "33c601e9-98ef-4cac-8cf6-c46cc9d872e5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "33c601e9-98ef-4cac-8cf6-c46cc9d872e5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "00f4425a-a5bf-4274-8668-b79b99676a7c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "00f4425a-a5bf-4274-8668-b79b99676a7c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "a72ebc44-b195-4d0f-b1c9-3f08739b5bb8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "a72ebc44-b195-4d0f-b1c9-3f08739b5bb8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": 
false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "829b5935-2714-4ff0-949e-24ee3beedfae"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "829b5935-2714-4ff0-949e-24ee3beedfae",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "f8c49e2c-8c66-494f-9e1a-99c70583f386"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "f8c49e2c-8c66-494f-9e1a-99c70583f386",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:21:03.566 11:35:09 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:21:03.566 11:35:09 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:21:03.566 11:35:09 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:21:03.566 11:35:09 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:21:03.566 11:35:09 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 74332 00:21:03.566 11:35:09 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 74332 ']' 00:21:03.566 11:35:09 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 74332 00:21:03.566 11:35:09 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:21:03.566 11:35:09 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:03.566 11:35:09 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74332 00:21:03.566 11:35:09 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:03.566 killing process with pid 74332 00:21:03.566 11:35:09 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:03.566 11:35:09 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74332' 00:21:03.566 11:35:09 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 74332 00:21:03.566 
11:35:09 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 74332 00:21:06.096 11:35:11 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:06.096 11:35:11 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:21:06.096 11:35:11 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:21:06.096 11:35:11 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:06.096 11:35:11 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:06.355 ************************************ 00:21:06.355 START TEST bdev_hello_world 00:21:06.355 ************************************ 00:21:06.355 11:35:11 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:21:06.355 [2024-11-20 11:35:11.984248] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:21:06.355 [2024-11-20 11:35:11.984444] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74634 ] 00:21:06.613 [2024-11-20 11:35:12.189213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.613 [2024-11-20 11:35:12.368329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.180 [2024-11-20 11:35:12.866769] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:21:07.180 [2024-11-20 11:35:12.866824] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:21:07.180 [2024-11-20 11:35:12.866843] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:21:07.180 [2024-11-20 11:35:12.869147] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:21:07.180 [2024-11-20 11:35:12.869708] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:21:07.180 [2024-11-20 11:35:12.869761] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:21:07.180 [2024-11-20 11:35:12.869922] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
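The hello-world pass above is the stock hello_bdev example pointed at the first xNVMe bdev. Run standalone from the repo root (same flags and config as in this run), it amounts to:

    # hello_bdev opens the bdev given by -b, writes 'Hello World!', reads it
    # back and stops; --json supplies the config that recreates the xNVMe bdevs.
    build/examples/hello_bdev --json test/bdev/bdev.json -b nvme0n1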
00:21:07.180 00:21:07.180 [2024-11-20 11:35:12.869948] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:21:08.556 00:21:08.556 real 0m2.220s 00:21:08.556 user 0m1.823s 00:21:08.556 sys 0m0.276s 00:21:08.556 11:35:14 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:08.556 11:35:14 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:21:08.556 ************************************ 00:21:08.556 END TEST bdev_hello_world 00:21:08.556 ************************************ 00:21:08.556 11:35:14 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:21:08.556 11:35:14 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:08.556 11:35:14 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:08.556 11:35:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:08.556 ************************************ 00:21:08.556 START TEST bdev_bounds 00:21:08.556 ************************************ 00:21:08.556 11:35:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:21:08.556 11:35:14 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=74676 00:21:08.556 11:35:14 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:21:08.556 Process bdevio pid: 74676 00:21:08.556 11:35:14 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 74676' 00:21:08.556 11:35:14 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 74676 00:21:08.556 11:35:14 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:21:08.556 11:35:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 74676 ']' 00:21:08.556 11:35:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:08.556 11:35:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:08.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:08.556 11:35:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:08.556 11:35:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:08.556 11:35:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:21:08.556 [2024-11-20 11:35:14.263684] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
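As the trace shows, bdev_bounds drives bdevio in two steps: the app is started with -w so it waits on the RPC socket, and tests.py then triggers the CUnit suites. A sketch from the repo root, using the same arguments as this run:

    # Step 1: start bdevio against the same JSON config; -w holds it until
    # the tests are kicked off over RPC.
    test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    # Step 2: run the suites below, one per registered bdev.
    test/bdev/bdevio/tests.py perform_tests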
00:21:08.556 [2024-11-20 11:35:14.263871] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74676 ] 00:21:08.822 [2024-11-20 11:35:14.464694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:09.090 [2024-11-20 11:35:14.648151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:09.090 [2024-11-20 11:35:14.648312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:09.090 [2024-11-20 11:35:14.648326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:09.656 11:35:15 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:09.656 11:35:15 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:21:09.656 11:35:15 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:21:09.915 I/O targets: 00:21:09.915 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:21:09.915 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:21:09.915 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:21:09.915 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:21:09.915 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:21:09.915 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:21:09.915 00:21:09.915 00:21:09.915 CUnit - A unit testing framework for C - Version 2.1-3 00:21:09.915 http://cunit.sourceforge.net/ 00:21:09.915 00:21:09.915 00:21:09.915 Suite: bdevio tests on: nvme3n1 00:21:09.915 Test: blockdev write read block ...passed 00:21:09.915 Test: blockdev write zeroes read block ...passed 00:21:09.915 Test: blockdev write zeroes read no split ...passed 00:21:09.915 Test: blockdev write zeroes read split ...passed 00:21:09.915 Test: blockdev write zeroes read split partial ...passed 00:21:09.915 Test: blockdev reset ...passed 00:21:09.915 Test: blockdev write read 8 blocks ...passed 00:21:09.915 Test: blockdev write read size > 128k ...passed 00:21:09.915 Test: blockdev write read invalid size ...passed 00:21:09.915 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:09.915 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:09.915 Test: blockdev write read max offset ...passed 00:21:09.915 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:09.915 Test: blockdev writev readv 8 blocks ...passed 00:21:09.915 Test: blockdev writev readv 30 x 1block ...passed 00:21:09.915 Test: blockdev writev readv block ...passed 00:21:09.915 Test: blockdev writev readv size > 128k ...passed 00:21:09.915 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:09.915 Test: blockdev comparev and writev ...passed 00:21:09.915 Test: blockdev nvme passthru rw ...passed 00:21:09.915 Test: blockdev nvme passthru vendor specific ...passed 00:21:09.915 Test: blockdev nvme admin passthru ...passed 00:21:09.915 Test: blockdev copy ...passed 00:21:09.915 Suite: bdevio tests on: nvme2n1 00:21:09.915 Test: blockdev write read block ...passed 00:21:09.915 Test: blockdev write zeroes read block ...passed 00:21:09.915 Test: blockdev write zeroes read no split ...passed 00:21:09.915 Test: blockdev write zeroes read split ...passed 00:21:09.915 Test: blockdev write zeroes read split partial ...passed 00:21:09.915 Test: blockdev reset ...passed 
00:21:09.915 Test: blockdev write read 8 blocks ...passed 00:21:09.915 Test: blockdev write read size > 128k ...passed 00:21:09.915 Test: blockdev write read invalid size ...passed 00:21:09.915 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:09.915 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:09.915 Test: blockdev write read max offset ...passed 00:21:09.915 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:09.915 Test: blockdev writev readv 8 blocks ...passed 00:21:09.915 Test: blockdev writev readv 30 x 1block ...passed 00:21:09.915 Test: blockdev writev readv block ...passed 00:21:09.915 Test: blockdev writev readv size > 128k ...passed 00:21:09.915 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:09.915 Test: blockdev comparev and writev ...passed 00:21:09.915 Test: blockdev nvme passthru rw ...passed 00:21:09.915 Test: blockdev nvme passthru vendor specific ...passed 00:21:09.915 Test: blockdev nvme admin passthru ...passed 00:21:09.915 Test: blockdev copy ...passed 00:21:09.915 Suite: bdevio tests on: nvme1n1 00:21:09.915 Test: blockdev write read block ...passed 00:21:09.915 Test: blockdev write zeroes read block ...passed 00:21:10.175 Test: blockdev write zeroes read no split ...passed 00:21:10.175 Test: blockdev write zeroes read split ...passed 00:21:10.175 Test: blockdev write zeroes read split partial ...passed 00:21:10.175 Test: blockdev reset ...passed 00:21:10.175 Test: blockdev write read 8 blocks ...passed 00:21:10.175 Test: blockdev write read size > 128k ...passed 00:21:10.175 Test: blockdev write read invalid size ...passed 00:21:10.175 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:10.175 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:10.175 Test: blockdev write read max offset ...passed 00:21:10.175 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:10.175 Test: blockdev writev readv 8 blocks ...passed 00:21:10.175 Test: blockdev writev readv 30 x 1block ...passed 00:21:10.175 Test: blockdev writev readv block ...passed 00:21:10.175 Test: blockdev writev readv size > 128k ...passed 00:21:10.175 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:10.175 Test: blockdev comparev and writev ...passed 00:21:10.175 Test: blockdev nvme passthru rw ...passed 00:21:10.175 Test: blockdev nvme passthru vendor specific ...passed 00:21:10.175 Test: blockdev nvme admin passthru ...passed 00:21:10.175 Test: blockdev copy ...passed 00:21:10.175 Suite: bdevio tests on: nvme0n3 00:21:10.175 Test: blockdev write read block ...passed 00:21:10.175 Test: blockdev write zeroes read block ...passed 00:21:10.175 Test: blockdev write zeroes read no split ...passed 00:21:10.175 Test: blockdev write zeroes read split ...passed 00:21:10.175 Test: blockdev write zeroes read split partial ...passed 00:21:10.175 Test: blockdev reset ...passed 00:21:10.175 Test: blockdev write read 8 blocks ...passed 00:21:10.175 Test: blockdev write read size > 128k ...passed 00:21:10.175 Test: blockdev write read invalid size ...passed 00:21:10.175 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:10.175 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:10.175 Test: blockdev write read max offset ...passed 00:21:10.175 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:10.175 Test: blockdev writev readv 8 blocks 
...passed 00:21:10.175 Test: blockdev writev readv 30 x 1block ...passed 00:21:10.175 Test: blockdev writev readv block ...passed 00:21:10.175 Test: blockdev writev readv size > 128k ...passed 00:21:10.175 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:10.175 Test: blockdev comparev and writev ...passed 00:21:10.175 Test: blockdev nvme passthru rw ...passed 00:21:10.175 Test: blockdev nvme passthru vendor specific ...passed 00:21:10.175 Test: blockdev nvme admin passthru ...passed 00:21:10.175 Test: blockdev copy ...passed 00:21:10.175 Suite: bdevio tests on: nvme0n2 00:21:10.175 Test: blockdev write read block ...passed 00:21:10.175 Test: blockdev write zeroes read block ...passed 00:21:10.175 Test: blockdev write zeroes read no split ...passed 00:21:10.175 Test: blockdev write zeroes read split ...passed 00:21:10.433 Test: blockdev write zeroes read split partial ...passed 00:21:10.433 Test: blockdev reset ...passed 00:21:10.433 Test: blockdev write read 8 blocks ...passed 00:21:10.433 Test: blockdev write read size > 128k ...passed 00:21:10.433 Test: blockdev write read invalid size ...passed 00:21:10.433 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:10.433 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:10.433 Test: blockdev write read max offset ...passed 00:21:10.433 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:10.433 Test: blockdev writev readv 8 blocks ...passed 00:21:10.433 Test: blockdev writev readv 30 x 1block ...passed 00:21:10.433 Test: blockdev writev readv block ...passed 00:21:10.433 Test: blockdev writev readv size > 128k ...passed 00:21:10.433 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:10.433 Test: blockdev comparev and writev ...passed 00:21:10.433 Test: blockdev nvme passthru rw ...passed 00:21:10.433 Test: blockdev nvme passthru vendor specific ...passed 00:21:10.433 Test: blockdev nvme admin passthru ...passed 00:21:10.433 Test: blockdev copy ...passed 00:21:10.433 Suite: bdevio tests on: nvme0n1 00:21:10.433 Test: blockdev write read block ...passed 00:21:10.433 Test: blockdev write zeroes read block ...passed 00:21:10.433 Test: blockdev write zeroes read no split ...passed 00:21:10.433 Test: blockdev write zeroes read split ...passed 00:21:10.433 Test: blockdev write zeroes read split partial ...passed 00:21:10.433 Test: blockdev reset ...passed 00:21:10.433 Test: blockdev write read 8 blocks ...passed 00:21:10.433 Test: blockdev write read size > 128k ...passed 00:21:10.433 Test: blockdev write read invalid size ...passed 00:21:10.433 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:10.433 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:10.433 Test: blockdev write read max offset ...passed 00:21:10.433 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:10.433 Test: blockdev writev readv 8 blocks ...passed 00:21:10.433 Test: blockdev writev readv 30 x 1block ...passed 00:21:10.433 Test: blockdev writev readv block ...passed 00:21:10.433 Test: blockdev writev readv size > 128k ...passed 00:21:10.433 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:10.433 Test: blockdev comparev and writev ...passed 00:21:10.433 Test: blockdev nvme passthru rw ...passed 00:21:10.433 Test: blockdev nvme passthru vendor specific ...passed 00:21:10.433 Test: blockdev nvme admin passthru ...passed 00:21:10.433 Test: blockdev copy ...passed 
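All six suites pass every case; the CUnit summary that follows tallies one suite per bdev, 6 suites x 23 cases = 138 tests and 780 asserts. Since the xNVMe bdevs advertise only read, write and write_zeroes in supported_io_types (see the bdev_get_bdevs dump earlier), cases such as flush, unmap, compare and NVMe passthrough presumably pass as skipped no-ops. What a given bdev claims can be checked directly (bdev name taken from this run):

    scripts/rpc.py bdev_get_bdevs -b nvme2n1 | jq '.[0].supported_io_types'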
00:21:10.433 00:21:10.433 Run Summary: Type Total Ran Passed Failed Inactive 00:21:10.433 suites 6 6 n/a 0 0 00:21:10.433 tests 138 138 138 0 0 00:21:10.433 asserts 780 780 780 0 n/a 00:21:10.433 00:21:10.433 Elapsed time = 1.817 seconds 00:21:10.433 0 00:21:10.433 11:35:16 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 74676 00:21:10.433 11:35:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 74676 ']' 00:21:10.433 11:35:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 74676 00:21:10.433 11:35:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:21:10.433 11:35:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:10.433 11:35:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74676 00:21:10.433 killing process with pid 74676 00:21:10.434 11:35:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:10.434 11:35:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:10.434 11:35:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74676' 00:21:10.434 11:35:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 74676 00:21:10.434 11:35:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 74676 00:21:11.808 ************************************ 00:21:11.808 END TEST bdev_bounds 00:21:11.808 ************************************ 00:21:11.808 11:35:17 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:21:11.808 00:21:11.808 real 0m3.269s 00:21:11.808 user 0m8.158s 00:21:11.808 sys 0m0.466s 00:21:11.808 11:35:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:11.808 11:35:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:21:11.808 11:35:17 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:21:11.808 11:35:17 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:11.808 11:35:17 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:11.808 11:35:17 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:11.808 ************************************ 00:21:11.808 START TEST bdev_nbd 00:21:11.808 ************************************ 00:21:11.808 11:35:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:21:11.808 11:35:17 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:21:11.808 11:35:17 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:21:11.808 11:35:17 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:11.808 11:35:17 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:11.808 11:35:17 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:21:11.808 11:35:17 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:21:11.808 11:35:17 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
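bdev_nbd exports each bdev as a kernel /dev/nbdX node through a dedicated RPC socket and then exercises it from the host side: after every nbd_start_disk the helper waits for the node to appear in /proc/partitions and reads one 4096-byte block with O_DIRECT (the dd ... iflag=direct lines below). The underlying RPCs, sketched with the socket and names from this run:

    # The nbd kernel module must be present (the script checks /sys/module/nbd):
    sudo modprobe nbd
    # Export a bdev as a kernel block device, list the mappings, tear one down:
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0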
00:21:11.808 11:35:17 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:21:11.808 11:35:17 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:21:11.808 11:35:17 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:21:11.808 11:35:17 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:21:11.808 11:35:17 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:21:11.808 11:35:17 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:21:11.808 11:35:17 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:21:11.808 11:35:17 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:21:11.808 11:35:17 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=74748 00:21:11.808 11:35:17 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:21:11.808 11:35:17 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 74748 /var/tmp/spdk-nbd.sock 00:21:11.808 11:35:17 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:21:11.808 11:35:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 74748 ']' 00:21:11.808 11:35:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:21:11.808 11:35:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:11.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:21:11.808 11:35:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:21:11.808 11:35:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:11.808 11:35:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:21:12.067 [2024-11-20 11:35:17.610505] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
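The NBD tests talk to a separate bdev_svc instance on its own socket, and waitforlisten 74748 /var/tmp/spdk-nbd.sock above holds the script until that server answers. A rough stand-in for that wait, using rpc_get_methods as the liveness probe (the helper's actual mechanism may differ):

    # Poll the dedicated socket until the RPC server responds.
    until scripts/rpc.py -s /var/tmp/spdk-nbd.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done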
00:21:12.067 [2024-11-20 11:35:17.611660] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:12.067 [2024-11-20 11:35:17.817453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.325 [2024-11-20 11:35:17.994887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:13.259 11:35:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:13.259 11:35:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:21:13.259 11:35:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:21:13.259 11:35:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:13.259 11:35:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:21:13.259 11:35:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:21:13.259 11:35:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:21:13.259 11:35:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:13.259 11:35:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:21:13.259 11:35:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:21:13.259 11:35:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:21:13.259 11:35:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:21:13.259 11:35:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:21:13.259 11:35:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:21:13.259 11:35:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:21:13.518 11:35:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:21:13.518 11:35:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:21:13.518 11:35:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:21:13.518 11:35:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:13.518 11:35:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:13.518 11:35:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:13.518 11:35:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:13.518 11:35:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:13.518 11:35:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:13.518 11:35:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:13.518 11:35:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:13.518 11:35:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:13.518 
1+0 records in 00:21:13.518 1+0 records out 00:21:13.518 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000442245 s, 9.3 MB/s 00:21:13.518 11:35:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:13.518 11:35:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:13.518 11:35:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:13.518 11:35:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:13.518 11:35:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:13.518 11:35:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:13.518 11:35:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:21:13.518 11:35:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:21:13.776 11:35:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:21:13.776 11:35:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:21:13.776 11:35:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:21:13.776 11:35:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:21:13.776 11:35:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:13.776 11:35:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:13.776 11:35:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:13.776 11:35:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:21:13.776 11:35:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:13.776 11:35:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:13.776 11:35:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:13.776 11:35:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:13.776 1+0 records in 00:21:13.776 1+0 records out 00:21:13.776 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000726414 s, 5.6 MB/s 00:21:13.776 11:35:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:13.776 11:35:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:13.776 11:35:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:13.776 11:35:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:13.776 11:35:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:13.776 11:35:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:13.776 11:35:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:21:13.776 11:35:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:21:14.343 11:35:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:21:14.343 11:35:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:21:14.343 11:35:19 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:21:14.343 11:35:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:21:14.343 11:35:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:14.343 11:35:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:14.343 11:35:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:14.343 11:35:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:21:14.343 11:35:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:14.343 11:35:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:14.343 11:35:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:14.343 11:35:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:14.343 1+0 records in 00:21:14.343 1+0 records out 00:21:14.343 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000536227 s, 7.6 MB/s 00:21:14.343 11:35:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:14.343 11:35:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:14.343 11:35:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:14.343 11:35:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:14.343 11:35:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:14.343 11:35:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:14.343 11:35:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:21:14.343 11:35:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:21:14.602 11:35:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:21:14.602 11:35:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:21:14.602 11:35:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:21:14.602 11:35:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:21:14.602 11:35:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:14.602 11:35:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:14.602 11:35:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:14.602 11:35:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:21:14.602 11:35:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:14.602 11:35:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:14.602 11:35:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:14.602 11:35:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:14.602 1+0 records in 00:21:14.602 1+0 records out 00:21:14.602 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000624592 s, 6.6 MB/s 00:21:14.602 11:35:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:14.602 11:35:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:14.602 11:35:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:14.602 11:35:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:14.602 11:35:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:14.602 11:35:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:14.602 11:35:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:21:14.602 11:35:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:21:14.860 11:35:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:21:14.860 11:35:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:21:14.860 11:35:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:21:14.860 11:35:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:21:14.860 11:35:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:14.860 11:35:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:14.860 11:35:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:14.860 11:35:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:21:14.860 11:35:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:14.860 11:35:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:14.860 11:35:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:14.860 11:35:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:14.860 1+0 records in 00:21:14.860 1+0 records out 00:21:14.860 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000704106 s, 5.8 MB/s 00:21:14.860 11:35:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:14.860 11:35:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:14.860 11:35:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:14.860 11:35:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:14.860 11:35:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:14.860 11:35:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:14.860 11:35:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:21:14.860 11:35:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:21:15.119 11:35:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:21:15.119 11:35:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:21:15.119 11:35:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:21:15.119 11:35:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:21:15.119 11:35:20 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:15.119 11:35:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:15.119 11:35:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:15.119 11:35:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:21:15.119 11:35:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:15.119 11:35:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:15.119 11:35:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:15.119 11:35:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:15.378 1+0 records in 00:21:15.378 1+0 records out 00:21:15.378 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000688415 s, 5.9 MB/s 00:21:15.378 11:35:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:15.378 11:35:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:15.378 11:35:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:15.378 11:35:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:15.378 11:35:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:15.378 11:35:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:15.378 11:35:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:21:15.378 11:35:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:15.638 11:35:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:21:15.638 { 00:21:15.638 "nbd_device": "/dev/nbd0", 00:21:15.638 "bdev_name": "nvme0n1" 00:21:15.638 }, 00:21:15.638 { 00:21:15.638 "nbd_device": "/dev/nbd1", 00:21:15.638 "bdev_name": "nvme0n2" 00:21:15.638 }, 00:21:15.638 { 00:21:15.638 "nbd_device": "/dev/nbd2", 00:21:15.638 "bdev_name": "nvme0n3" 00:21:15.638 }, 00:21:15.638 { 00:21:15.638 "nbd_device": "/dev/nbd3", 00:21:15.638 "bdev_name": "nvme1n1" 00:21:15.638 }, 00:21:15.638 { 00:21:15.638 "nbd_device": "/dev/nbd4", 00:21:15.638 "bdev_name": "nvme2n1" 00:21:15.638 }, 00:21:15.638 { 00:21:15.638 "nbd_device": "/dev/nbd5", 00:21:15.638 "bdev_name": "nvme3n1" 00:21:15.638 } 00:21:15.638 ]' 00:21:15.638 11:35:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:21:15.638 11:35:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:21:15.638 { 00:21:15.638 "nbd_device": "/dev/nbd0", 00:21:15.638 "bdev_name": "nvme0n1" 00:21:15.638 }, 00:21:15.638 { 00:21:15.638 "nbd_device": "/dev/nbd1", 00:21:15.638 "bdev_name": "nvme0n2" 00:21:15.638 }, 00:21:15.638 { 00:21:15.638 "nbd_device": "/dev/nbd2", 00:21:15.638 "bdev_name": "nvme0n3" 00:21:15.638 }, 00:21:15.638 { 00:21:15.638 "nbd_device": "/dev/nbd3", 00:21:15.638 "bdev_name": "nvme1n1" 00:21:15.638 }, 00:21:15.638 { 00:21:15.638 "nbd_device": "/dev/nbd4", 00:21:15.638 "bdev_name": "nvme2n1" 00:21:15.638 }, 00:21:15.638 { 00:21:15.638 "nbd_device": "/dev/nbd5", 00:21:15.638 "bdev_name": "nvme3n1" 00:21:15.638 } 00:21:15.638 ]' 00:21:15.638 11:35:21 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:21:15.638 11:35:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:21:15.638 11:35:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:15.638 11:35:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:21:15.638 11:35:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:15.638 11:35:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:21:15.638 11:35:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:15.638 11:35:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:21:15.898 11:35:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:15.898 11:35:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:15.898 11:35:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:15.898 11:35:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:15.898 11:35:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:15.898 11:35:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:15.898 11:35:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:15.898 11:35:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:15.898 11:35:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:15.898 11:35:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:21:16.156 11:35:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:16.156 11:35:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:16.156 11:35:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:16.156 11:35:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:16.156 11:35:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:16.156 11:35:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:16.156 11:35:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:16.156 11:35:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:16.156 11:35:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:16.156 11:35:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:21:16.415 11:35:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:21:16.415 11:35:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:21:16.415 11:35:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:21:16.415 11:35:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:16.415 11:35:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:16.415 11:35:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:21:16.415 11:35:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:16.415 11:35:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:16.415 11:35:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:16.415 11:35:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:21:16.674 11:35:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:21:16.674 11:35:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:21:16.674 11:35:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:21:16.674 11:35:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:16.674 11:35:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:16.674 11:35:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:21:16.674 11:35:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:16.674 11:35:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:16.674 11:35:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:16.674 11:35:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:21:16.933 11:35:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:21:16.933 11:35:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:21:16.933 11:35:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:21:16.933 11:35:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:16.933 11:35:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:16.933 11:35:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:21:16.933 11:35:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:16.933 11:35:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:16.933 11:35:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:16.933 11:35:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:21:17.191 11:35:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:21:17.191 11:35:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:21:17.191 11:35:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:21:17.191 11:35:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:17.191 11:35:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:17.191 11:35:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:21:17.191 11:35:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:17.191 11:35:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:17.191 11:35:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:17.191 11:35:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:17.191 11:35:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:17.450 11:35:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:21:17.450 11:35:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:17.450 11:35:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:21:17.450 11:35:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:21:17.450 11:35:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:17.450 11:35:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:21:17.450 11:35:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:21:17.450 11:35:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:21:17.450 11:35:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:21:17.450 11:35:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:21:17.450 11:35:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:21:17.450 11:35:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:21:17.450 11:35:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:21:17.450 11:35:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:17.450 11:35:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:21:17.450 11:35:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:21:17.450 11:35:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:21:17.450 11:35:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:21:17.450 11:35:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:21:17.450 11:35:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:17.450 11:35:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:21:17.450 11:35:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:17.450 11:35:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:21:17.450 11:35:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:17.450 11:35:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:21:17.451 11:35:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:17.451 11:35:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:21:17.451 11:35:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:21:17.709 /dev/nbd0 00:21:17.967 11:35:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:17.967 11:35:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:17.967 11:35:23 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:17.967 11:35:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:17.967 11:35:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:17.967 11:35:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:17.967 11:35:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:17.967 11:35:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:17.967 11:35:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:17.967 11:35:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:17.967 11:35:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:17.967 1+0 records in 00:21:17.967 1+0 records out 00:21:17.967 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000606637 s, 6.8 MB/s 00:21:17.967 11:35:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:17.967 11:35:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:17.967 11:35:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:17.967 11:35:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:17.967 11:35:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:17.967 11:35:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:17.967 11:35:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:21:17.967 11:35:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:21:17.967 /dev/nbd1 00:21:18.227 11:35:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:18.227 11:35:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:18.227 11:35:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:21:18.227 11:35:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:18.227 11:35:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:18.227 11:35:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:18.227 11:35:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:21:18.227 11:35:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:18.227 11:35:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:18.227 11:35:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:18.227 11:35:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:18.227 1+0 records in 00:21:18.227 1+0 records out 00:21:18.227 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000467423 s, 8.8 MB/s 00:21:18.227 11:35:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:18.227 11:35:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:18.227 11:35:23 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:18.227 11:35:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:18.227 11:35:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:18.227 11:35:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:18.227 11:35:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:21:18.227 11:35:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:21:18.491 /dev/nbd10 00:21:18.491 11:35:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:21:18.491 11:35:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:21:18.491 11:35:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:21:18.491 11:35:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:18.491 11:35:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:18.491 11:35:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:18.491 11:35:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:21:18.491 11:35:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:18.491 11:35:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:18.491 11:35:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:18.492 11:35:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:18.492 1+0 records in 00:21:18.492 1+0 records out 00:21:18.492 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000683125 s, 6.0 MB/s 00:21:18.492 11:35:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:18.492 11:35:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:18.492 11:35:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:18.492 11:35:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:18.492 11:35:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:18.492 11:35:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:18.492 11:35:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:21:18.492 11:35:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:21:18.755 /dev/nbd11 00:21:18.755 11:35:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:21:18.755 11:35:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:21:18.755 11:35:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:21:18.755 11:35:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:18.755 11:35:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:18.755 11:35:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:18.755 11:35:24 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:21:18.755 11:35:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:18.755 11:35:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:18.755 11:35:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:18.755 11:35:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:18.755 1+0 records in 00:21:18.755 1+0 records out 00:21:18.755 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000532229 s, 7.7 MB/s 00:21:18.755 11:35:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:18.755 11:35:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:18.755 11:35:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:18.755 11:35:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:18.755 11:35:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:18.755 11:35:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:18.755 11:35:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:21:18.755 11:35:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:21:19.014 /dev/nbd12 00:21:19.014 11:35:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:21:19.014 11:35:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:21:19.014 11:35:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:21:19.014 11:35:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:19.014 11:35:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:19.014 11:35:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:19.014 11:35:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:21:19.014 11:35:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:19.014 11:35:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:19.014 11:35:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:19.014 11:35:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:19.014 1+0 records in 00:21:19.014 1+0 records out 00:21:19.014 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000633316 s, 6.5 MB/s 00:21:19.014 11:35:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:19.014 11:35:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:19.014 11:35:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:19.014 11:35:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:19.014 11:35:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:19.014 11:35:24 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:19.014 11:35:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:21:19.014 11:35:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:21:19.273 /dev/nbd13 00:21:19.273 11:35:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:21:19.273 11:35:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:21:19.273 11:35:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:21:19.273 11:35:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:19.273 11:35:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:19.273 11:35:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:19.273 11:35:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:21:19.273 11:35:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:19.273 11:35:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:19.273 11:35:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:19.273 11:35:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:19.273 1+0 records in 00:21:19.273 1+0 records out 00:21:19.273 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000792213 s, 5.2 MB/s 00:21:19.273 11:35:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:19.273 11:35:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:19.273 11:35:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:19.273 11:35:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:19.273 11:35:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:19.273 11:35:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:19.273 11:35:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:21:19.273 11:35:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:19.273 11:35:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:19.273 11:35:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:19.532 11:35:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:21:19.532 { 00:21:19.532 "nbd_device": "/dev/nbd0", 00:21:19.532 "bdev_name": "nvme0n1" 00:21:19.532 }, 00:21:19.532 { 00:21:19.532 "nbd_device": "/dev/nbd1", 00:21:19.532 "bdev_name": "nvme0n2" 00:21:19.532 }, 00:21:19.532 { 00:21:19.532 "nbd_device": "/dev/nbd10", 00:21:19.532 "bdev_name": "nvme0n3" 00:21:19.532 }, 00:21:19.532 { 00:21:19.532 "nbd_device": "/dev/nbd11", 00:21:19.532 "bdev_name": "nvme1n1" 00:21:19.532 }, 00:21:19.532 { 00:21:19.532 "nbd_device": "/dev/nbd12", 00:21:19.532 "bdev_name": "nvme2n1" 00:21:19.532 }, 00:21:19.532 { 00:21:19.532 "nbd_device": "/dev/nbd13", 00:21:19.532 "bdev_name": "nvme3n1" 00:21:19.532 } 00:21:19.532 ]' 00:21:19.532 11:35:25 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:21:19.532 { 00:21:19.532 "nbd_device": "/dev/nbd0", 00:21:19.532 "bdev_name": "nvme0n1" 00:21:19.532 }, 00:21:19.532 { 00:21:19.532 "nbd_device": "/dev/nbd1", 00:21:19.532 "bdev_name": "nvme0n2" 00:21:19.532 }, 00:21:19.532 { 00:21:19.532 "nbd_device": "/dev/nbd10", 00:21:19.532 "bdev_name": "nvme0n3" 00:21:19.532 }, 00:21:19.532 { 00:21:19.532 "nbd_device": "/dev/nbd11", 00:21:19.532 "bdev_name": "nvme1n1" 00:21:19.532 }, 00:21:19.532 { 00:21:19.532 "nbd_device": "/dev/nbd12", 00:21:19.532 "bdev_name": "nvme2n1" 00:21:19.532 }, 00:21:19.532 { 00:21:19.532 "nbd_device": "/dev/nbd13", 00:21:19.532 "bdev_name": "nvme3n1" 00:21:19.532 } 00:21:19.532 ]' 00:21:19.532 11:35:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:19.791 11:35:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:21:19.791 /dev/nbd1 00:21:19.791 /dev/nbd10 00:21:19.791 /dev/nbd11 00:21:19.791 /dev/nbd12 00:21:19.791 /dev/nbd13' 00:21:19.791 11:35:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:19.791 11:35:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:21:19.791 /dev/nbd1 00:21:19.791 /dev/nbd10 00:21:19.791 /dev/nbd11 00:21:19.791 /dev/nbd12 00:21:19.791 /dev/nbd13' 00:21:19.791 11:35:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:21:19.791 11:35:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:21:19.791 11:35:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:21:19.791 11:35:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:21:19.791 11:35:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:21:19.791 11:35:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:21:19.791 11:35:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:21:19.791 11:35:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:21:19.791 11:35:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:21:19.791 11:35:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:21:19.791 11:35:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:21:19.791 256+0 records in 00:21:19.791 256+0 records out 00:21:19.791 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00884114 s, 119 MB/s 00:21:19.791 11:35:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:19.791 11:35:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:21:19.791 256+0 records in 00:21:19.791 256+0 records out 00:21:19.791 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.117226 s, 8.9 MB/s 00:21:19.791 11:35:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:19.791 11:35:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:21:20.050 256+0 records in 00:21:20.050 256+0 records out 00:21:20.050 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.12681 s, 8.3 MB/s 00:21:20.050 11:35:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:20.050 11:35:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:21:20.050 256+0 records in 00:21:20.050 256+0 records out 00:21:20.050 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.125263 s, 8.4 MB/s 00:21:20.050 11:35:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:20.050 11:35:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:21:20.308 256+0 records in 00:21:20.308 256+0 records out 00:21:20.308 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.132316 s, 7.9 MB/s 00:21:20.308 11:35:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:20.308 11:35:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:21:20.308 256+0 records in 00:21:20.308 256+0 records out 00:21:20.308 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.148393 s, 7.1 MB/s 00:21:20.308 11:35:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:20.308 11:35:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:21:20.566 256+0 records in 00:21:20.566 256+0 records out 00:21:20.566 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.128511 s, 8.2 MB/s 00:21:20.566 11:35:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:21:20.566 11:35:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:21:20.566 11:35:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:21:20.566 11:35:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:21:20.566 11:35:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:21:20.566 11:35:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:21:20.566 11:35:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:21:20.566 11:35:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:20.566 11:35:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:21:20.566 11:35:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:20.566 11:35:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:21:20.566 11:35:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:20.566 11:35:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:21:20.567 11:35:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:20.567 11:35:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
/dev/nbd11 00:21:20.567 11:35:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:20.567 11:35:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:21:20.567 11:35:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:20.567 11:35:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:21:20.567 11:35:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:21:20.567 11:35:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:21:20.567 11:35:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:20.567 11:35:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:21:20.567 11:35:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:20.567 11:35:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:21:20.567 11:35:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:20.567 11:35:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:21:20.826 11:35:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:20.826 11:35:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:20.826 11:35:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:20.826 11:35:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:20.826 11:35:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:20.826 11:35:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:20.826 11:35:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:20.826 11:35:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:20.826 11:35:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:20.826 11:35:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:21:21.393 11:35:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:21.393 11:35:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:21.393 11:35:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:21.393 11:35:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:21.393 11:35:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:21.393 11:35:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:21.393 11:35:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:21.393 11:35:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:21.393 11:35:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:21.393 11:35:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:21:21.652 11:35:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:21:21.652 11:35:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:21:21.652 11:35:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:21:21.653 11:35:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:21.653 11:35:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:21.653 11:35:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:21:21.653 11:35:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:21.653 11:35:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:21.653 11:35:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:21.653 11:35:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:21:21.911 11:35:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:21:21.912 11:35:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:21:21.912 11:35:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:21:21.912 11:35:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:21.912 11:35:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:21.912 11:35:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:21:21.912 11:35:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:21.912 11:35:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:21.912 11:35:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:21.912 11:35:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:21:22.172 11:35:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:21:22.172 11:35:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:21:22.172 11:35:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:21:22.172 11:35:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:22.172 11:35:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:22.172 11:35:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:21:22.172 11:35:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:22.172 11:35:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:22.172 11:35:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:22.172 11:35:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:21:22.431 11:35:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:21:22.431 11:35:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:21:22.431 11:35:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:21:22.431 11:35:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:22.431 11:35:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 
-- # (( i <= 20 )) 00:21:22.431 11:35:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:21:22.431 11:35:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:22.431 11:35:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:22.431 11:35:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:22.431 11:35:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:22.431 11:35:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:22.689 11:35:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:21:22.689 11:35:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:22.689 11:35:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:21:22.948 11:35:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:21:22.948 11:35:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:21:22.948 11:35:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:22.948 11:35:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:21:22.948 11:35:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:21:22.948 11:35:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:21:22.948 11:35:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:21:22.948 11:35:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:21:22.948 11:35:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:21:22.948 11:35:28 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:21:22.948 11:35:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:22.948 11:35:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:21:22.948 11:35:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:21:23.207 malloc_lvol_verify 00:21:23.207 11:35:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:21:23.466 3cdfaa81-0399-4f14-8302-af7971ad8c33 00:21:23.466 11:35:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:21:23.725 2bb1b9d2-c587-4796-9ced-b5091250e5ff 00:21:23.725 11:35:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:21:23.985 /dev/nbd0 00:21:23.985 11:35:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:21:23.985 11:35:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:21:23.985 11:35:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:21:23.985 11:35:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:21:23.985 11:35:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:21:23.985 mke2fs 1.47.0 (5-Feb-2023) 00:21:23.985 
Discarding device blocks: 0/4096 done 00:21:23.985 Creating filesystem with 4096 1k blocks and 1024 inodes 00:21:23.985 00:21:23.985 Allocating group tables: 0/1 done 00:21:23.985 Writing inode tables: 0/1 done 00:21:23.985 Creating journal (1024 blocks): done 00:21:23.985 Writing superblocks and filesystem accounting information: 0/1 done 00:21:23.985 00:21:23.985 11:35:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:21:23.985 11:35:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:23.985 11:35:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:23.985 11:35:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:23.985 11:35:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:21:23.985 11:35:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:23.985 11:35:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:21:24.244 11:35:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:24.244 11:35:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:24.244 11:35:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:24.244 11:35:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:24.244 11:35:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:24.244 11:35:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:24.244 11:35:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:24.244 11:35:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:24.244 11:35:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 74748 00:21:24.244 11:35:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 74748 ']' 00:21:24.244 11:35:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 74748 00:21:24.244 11:35:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:21:24.244 11:35:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:24.244 11:35:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74748 00:21:24.502 killing process with pid 74748 00:21:24.502 11:35:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:24.502 11:35:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:24.502 11:35:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74748' 00:21:24.502 11:35:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 74748 00:21:24.502 11:35:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 74748 00:21:25.879 11:35:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:21:25.879 00:21:25.879 real 0m13.865s 00:21:25.879 user 0m18.704s 00:21:25.879 sys 0m5.634s 00:21:25.879 11:35:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:25.879 11:35:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:21:25.879 ************************************ 00:21:25.879 END TEST bdev_nbd 00:21:25.879 
************************************ 00:21:25.879 11:35:31 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:21:25.879 11:35:31 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:21:25.879 11:35:31 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:21:25.879 11:35:31 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:21:25.879 11:35:31 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:25.879 11:35:31 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:25.879 11:35:31 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:25.879 ************************************ 00:21:25.879 START TEST bdev_fio 00:21:25.879 ************************************ 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:21:25.879 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:21:25.879 11:35:31 
blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:21:25.879 ************************************ 00:21:25.879 START TEST bdev_fio_rw_verify 00:21:25.879 ************************************ 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:25.879 11:35:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:26.138 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:21:26.138 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:21:26.138 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:21:26.138 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:21:26.138 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:21:26.138 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:21:26.138 fio-3.35 00:21:26.138 Starting 6 threads 00:21:38.336 00:21:38.336 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=75188: Wed Nov 20 11:35:42 2024 00:21:38.336 read: IOPS=27.1k, BW=106MiB/s (111MB/s)(1059MiB/10001msec) 00:21:38.336 slat (usec): min=2, max=3344, avg= 6.97, stdev= 8.30 00:21:38.336 clat (usec): min=129, max=47465, avg=693.78, 
stdev=362.47 00:21:38.336 lat (usec): min=136, max=47479, avg=700.75, stdev=363.04 00:21:38.336 clat percentiles (usec): 00:21:38.336 | 50.000th=[ 709], 99.000th=[ 1336], 99.900th=[ 2376], 99.990th=[ 3982], 00:21:38.336 | 99.999th=[47449] 00:21:38.336 write: IOPS=27.4k, BW=107MiB/s (112MB/s)(1071MiB/10001msec); 0 zone resets 00:21:38.336 slat (usec): min=13, max=4784, avg=27.38, stdev=30.38 00:21:38.336 clat (usec): min=94, max=7116, avg=780.54, stdev=275.59 00:21:38.336 lat (usec): min=113, max=7144, avg=807.92, stdev=278.70 00:21:38.336 clat percentiles (usec): 00:21:38.336 | 50.000th=[ 783], 99.000th=[ 1516], 99.900th=[ 2638], 99.990th=[ 4359], 00:21:38.336 | 99.999th=[ 6652] 00:21:38.336 bw ( KiB/s): min=81504, max=137920, per=100.00%, avg=110114.37, stdev=2308.99, samples=114 00:21:38.337 iops : min=20376, max=34480, avg=27528.42, stdev=577.25, samples=114 00:21:38.337 lat (usec) : 100=0.01%, 250=2.44%, 500=16.18%, 750=32.31%, 1000=36.29% 00:21:38.337 lat (msec) : 2=12.55%, 4=0.21%, 10=0.01%, 50=0.01% 00:21:38.337 cpu : usr=59.97%, sys=26.38%, ctx=7109, majf=0, minf=23380 00:21:38.337 IO depths : 1=12.1%, 2=24.6%, 4=50.4%, 8=12.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:38.337 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:38.337 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:38.337 issued rwts: total=271114,274205,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:38.337 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:38.337 00:21:38.337 Run status group 0 (all jobs): 00:21:38.337 READ: bw=106MiB/s (111MB/s), 106MiB/s-106MiB/s (111MB/s-111MB/s), io=1059MiB (1110MB), run=10001-10001msec 00:21:38.337 WRITE: bw=107MiB/s (112MB/s), 107MiB/s-107MiB/s (112MB/s-112MB/s), io=1071MiB (1123MB), run=10001-10001msec 00:21:38.595 ----------------------------------------------------- 00:21:38.595 Suppressions used: 00:21:38.595 count bytes template 00:21:38.595 6 48 /usr/src/fio/parse.c 00:21:38.595 2910 279360 /usr/src/fio/iolog.c 00:21:38.595 1 8 libtcmalloc_minimal.so 00:21:38.595 1 904 libcrypto.so 00:21:38.595 ----------------------------------------------------- 00:21:38.595 00:21:38.595 00:21:38.595 real 0m12.845s 00:21:38.595 user 0m38.268s 00:21:38.595 sys 0m16.281s 00:21:38.595 11:35:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:38.595 11:35:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:21:38.595 ************************************ 00:21:38.595 END TEST bdev_fio_rw_verify 00:21:38.595 ************************************ 00:21:38.963 11:35:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:21:38.963 11:35:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:38.963 11:35:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:21:38.963 11:35:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:38.963 11:35:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:21:38.963 11:35:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:21:38.963 11:35:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:21:38.963 11:35:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local 
fio_dir=/usr/src/fio 00:21:38.963 11:35:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:21:38.963 11:35:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:21:38.963 11:35:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:21:38.963 11:35:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:38.963 11:35:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:21:38.963 11:35:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:21:38.963 11:35:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:21:38.963 11:35:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:21:38.963 11:35:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:21:38.964 11:35:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "1e2e165e-dcc7-4c8e-adf7-ced22f76a56e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1e2e165e-dcc7-4c8e-adf7-ced22f76a56e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "33c601e9-98ef-4cac-8cf6-c46cc9d872e5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "33c601e9-98ef-4cac-8cf6-c46cc9d872e5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "00f4425a-a5bf-4274-8668-b79b99676a7c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "00f4425a-a5bf-4274-8668-b79b99676a7c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "a72ebc44-b195-4d0f-b1c9-3f08739b5bb8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "a72ebc44-b195-4d0f-b1c9-3f08739b5bb8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "829b5935-2714-4ff0-949e-24ee3beedfae"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "829b5935-2714-4ff0-949e-24ee3beedfae",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "f8c49e2c-8c66-494f-9e1a-99c70583f386"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "f8c49e2c-8c66-494f-9e1a-99c70583f386",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:21:38.964 11:35:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:21:38.964 11:35:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:38.964 11:35:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:21:38.964 /home/vagrant/spdk_repo/spdk 00:21:38.964 11:35:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:21:38.964 11:35:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
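[Note] For readers reproducing the bdev_fio stage above by hand: the harness preloads ASan together with SPDK's fio bdev plugin and points fio at the bdev JSON generated by the test. A minimal standalone sketch, using only the paths and flags that appear in this log (the fio source tree, libasan path, and repo location are specific to this CI VM):

    # Drive fio through SPDK's bdev ioengine; bdev.json describes the
    # xnvme bdevs created by the RPC setup earlier in this log.
    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
    /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
        /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio \
        --verify_state_save=0 \
        --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        --spdk_mem=0 \
        --aux-path=/home/vagrant/spdk_repo/spdk/../output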
00:21:38.964 00:21:38.964 real 0m13.051s 00:21:38.964 user 0m38.372s 00:21:38.964 sys 0m16.387s 00:21:38.964 11:35:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:38.964 11:35:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:21:38.964 ************************************ 00:21:38.964 END TEST bdev_fio 00:21:38.964 ************************************ 00:21:38.964 11:35:44 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:38.964 11:35:44 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:21:38.964 11:35:44 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:21:38.964 11:35:44 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:38.964 11:35:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:38.964 ************************************ 00:21:38.964 START TEST bdev_verify 00:21:38.964 ************************************ 00:21:38.964 11:35:44 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:21:38.964 [2024-11-20 11:35:44.638800] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:21:38.964 [2024-11-20 11:35:44.639772] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75358 ] 00:21:39.223 [2024-11-20 11:35:44.848406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:39.482 [2024-11-20 11:35:45.030167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:39.482 [2024-11-20 11:35:45.030186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:40.049 Running I/O for 5 seconds... 
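[Note] The bdev_verify stage above launches SPDK's bdevperf example in verify mode against the same bdev.json. A sketch of the invocation as captured in this log (-q queue depth, -o I/O size in bytes, -w workload, -t run time in seconds, -m reactor core mask; -C and the trailing empty argument are passed through by the test harness as-is):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''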
00:21:42.362 20064.00 IOPS, 78.38 MiB/s [2024-11-20T11:35:49.059Z] 19728.00 IOPS, 77.06 MiB/s [2024-11-20T11:35:50.066Z] 19498.67 IOPS, 76.17 MiB/s [2024-11-20T11:35:51.001Z] 20032.00 IOPS, 78.25 MiB/s [2024-11-20T11:35:51.001Z] 19565.20 IOPS, 76.43 MiB/s 00:21:45.239 Latency(us) 00:21:45.239 [2024-11-20T11:35:51.001Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:45.239 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:45.239 Verification LBA range: start 0x0 length 0x80000 00:21:45.239 nvme0n1 : 5.07 1412.85 5.52 0.00 0.00 90418.63 14979.66 86382.69 00:21:45.239 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:45.239 Verification LBA range: start 0x80000 length 0x80000 00:21:45.239 nvme0n1 : 5.03 1451.33 5.67 0.00 0.00 88023.81 13419.28 87381.33 00:21:45.239 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:45.239 Verification LBA range: start 0x0 length 0x80000 00:21:45.239 nvme0n2 : 5.06 1416.40 5.53 0.00 0.00 90008.55 11359.57 87880.66 00:21:45.239 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:45.239 Verification LBA range: start 0x80000 length 0x80000 00:21:45.239 nvme0n2 : 5.04 1446.65 5.65 0.00 0.00 88130.04 18100.42 77394.90 00:21:45.239 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:45.239 Verification LBA range: start 0x0 length 0x80000 00:21:45.239 nvme0n3 : 5.08 1412.11 5.52 0.00 0.00 90085.69 17601.10 76895.57 00:21:45.239 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:45.239 Verification LBA range: start 0x80000 length 0x80000 00:21:45.239 nvme0n3 : 5.05 1446.00 5.65 0.00 0.00 88014.49 17725.93 77894.22 00:21:45.239 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:45.239 Verification LBA range: start 0x0 length 0x20000 00:21:45.239 nvme1n1 : 5.08 1410.32 5.51 0.00 0.00 90010.54 17601.10 83886.08 00:21:45.239 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:45.239 Verification LBA range: start 0x20000 length 0x20000 00:21:45.239 nvme1n1 : 5.08 1462.42 5.71 0.00 0.00 86862.85 3198.78 84884.72 00:21:45.239 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:45.239 Verification LBA range: start 0x0 length 0xbd0bd 00:21:45.239 nvme2n1 : 5.09 2455.99 9.59 0.00 0.00 51363.93 5180.46 70404.39 00:21:45.239 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:45.240 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:21:45.240 nvme2n1 : 5.08 2506.51 9.79 0.00 0.00 50465.10 5086.84 73899.64 00:21:45.240 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:45.240 Verification LBA range: start 0x0 length 0xa0000 00:21:45.240 nvme3n1 : 5.09 1408.38 5.50 0.00 0.00 89687.06 8238.81 86382.69 00:21:45.240 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:45.240 Verification LBA range: start 0xa0000 length 0xa0000 00:21:45.240 nvme3n1 : 5.07 1464.88 5.72 0.00 0.00 86257.89 5242.88 88379.98 00:21:45.240 [2024-11-20T11:35:51.002Z] =================================================================================================================== 00:21:45.240 [2024-11-20T11:35:51.002Z] Total : 19293.83 75.37 0.00 0.00 78968.97 3198.78 88379.98 00:21:46.614 00:21:46.614 real 0m7.612s 00:21:46.614 user 0m11.963s 00:21:46.614 sys 0m1.902s 00:21:46.614 11:35:52 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:21:46.614 11:35:52 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:21:46.614 ************************************ 00:21:46.614 END TEST bdev_verify 00:21:46.614 ************************************ 00:21:46.614 11:35:52 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:21:46.614 11:35:52 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:21:46.614 11:35:52 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:46.614 11:35:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:46.614 ************************************ 00:21:46.614 START TEST bdev_verify_big_io 00:21:46.614 ************************************ 00:21:46.614 11:35:52 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:21:46.614 [2024-11-20 11:35:52.277771] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:21:46.614 [2024-11-20 11:35:52.277923] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75463 ] 00:21:46.872 [2024-11-20 11:35:52.459543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:46.873 [2024-11-20 11:35:52.588932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:46.873 [2024-11-20 11:35:52.588975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:47.808 Running I/O for 5 seconds... 
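Every START TEST/END TEST pair in this log is printed by the run_test helper from common/autotest_common.sh. A rough sketch of the pattern, reconstructed only from the banners and the real/user/sys timings it emits (the real helper also manages xtrace and exit-code propagation):

    run_test() {   # approximation, not the actual helper
        local test_name=$1; shift
        echo '************************************'
        echo "START TEST $test_name"
        echo '************************************'
        time "$@"                     # source of the real/user/sys lines above
        echo '************************************'
        echo "END TEST $test_name"
        echo '************************************'
    }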
00:21:54.039 1472.00 IOPS, 92.00 MiB/s [2024-11-20T11:35:59.801Z] 2928.00 IOPS, 183.00 MiB/s 00:21:54.039 Latency(us) 00:21:54.039 [2024-11-20T11:35:59.801Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:54.039 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:54.039 Verification LBA range: start 0x0 length 0x8000 00:21:54.039 nvme0n1 : 5.86 113.39 7.09 0.00 0.00 1059291.12 82887.44 1541906.04 00:21:54.039 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:54.039 Verification LBA range: start 0x8000 length 0x8000 00:21:54.039 nvme0n1 : 5.85 84.73 5.30 0.00 0.00 1405508.61 238675.87 2332831.94 00:21:54.039 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:54.039 Verification LBA range: start 0x0 length 0x8000 00:21:54.039 nvme0n2 : 5.86 109.23 6.83 0.00 0.00 1085042.10 183750.46 1270274.93 00:21:54.039 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:54.039 Verification LBA range: start 0x8000 length 0x8000 00:21:54.039 nvme0n2 : 6.18 101.02 6.31 0.00 0.00 1146749.57 90377.26 1653754.15 00:21:54.040 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:54.040 Verification LBA range: start 0x0 length 0x8000 00:21:54.040 nvme0n3 : 6.35 83.11 5.19 0.00 0.00 1322524.99 190740.97 1581851.79 00:21:54.040 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:54.040 Verification LBA range: start 0x8000 length 0x8000 00:21:54.040 nvme0n3 : 6.18 100.99 6.31 0.00 0.00 1092681.97 177758.60 2444680.05 00:21:54.040 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:54.040 Verification LBA range: start 0x0 length 0x2000 00:21:54.040 nvme1n1 : 6.24 117.93 7.37 0.00 0.00 898169.61 8488.47 1102502.77 00:21:54.040 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:54.040 Verification LBA range: start 0x2000 length 0x2000 00:21:54.040 nvme1n1 : 6.33 131.35 8.21 0.00 0.00 821911.76 73899.64 2189027.23 00:21:54.040 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:54.040 Verification LBA range: start 0x0 length 0xbd0b 00:21:54.040 nvme2n1 : 6.33 152.17 9.51 0.00 0.00 670739.41 4712.35 906768.58 00:21:54.040 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:54.040 Verification LBA range: start 0xbd0b length 0xbd0b 00:21:54.040 nvme2n1 : 6.34 164.04 10.25 0.00 0.00 627249.32 29709.65 1022611.26 00:21:54.040 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:54.040 Verification LBA range: start 0x0 length 0xa000 00:21:54.040 nvme3n1 : 6.36 160.99 10.06 0.00 0.00 611688.72 370.59 1206361.72 00:21:54.040 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:54.040 Verification LBA range: start 0xa000 length 0xa000 00:21:54.040 nvme3n1 : 6.35 118.49 7.41 0.00 0.00 833623.80 2559.02 2077179.12 00:21:54.040 [2024-11-20T11:35:59.802Z] =================================================================================================================== 00:21:54.040 [2024-11-20T11:35:59.802Z] Total : 1437.45 89.84 0.00 0.00 906312.46 370.59 2444680.05 00:21:55.943 00:21:55.943 real 0m9.023s 00:21:55.943 user 0m16.511s 00:21:55.943 sys 0m0.543s 00:21:55.943 11:36:01 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:55.943 11:36:01 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # 
set +x 00:21:55.943 ************************************ 00:21:55.943 END TEST bdev_verify_big_io 00:21:55.943 ************************************ 00:21:55.943 11:36:01 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:55.943 11:36:01 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:21:55.943 11:36:01 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:55.943 11:36:01 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:55.943 ************************************ 00:21:55.943 START TEST bdev_write_zeroes 00:21:55.943 ************************************ 00:21:55.943 11:36:01 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:55.943 [2024-11-20 11:36:01.390870] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:21:55.943 [2024-11-20 11:36:01.391057] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75584 ] 00:21:55.943 [2024-11-20 11:36:01.588806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:56.201 [2024-11-20 11:36:01.726996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:56.767 Running I/O for 1 seconds... 00:21:57.701 69248.00 IOPS, 270.50 MiB/s 00:21:57.701 Latency(us) 00:21:57.701 [2024-11-20T11:36:03.463Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:57.701 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:57.701 nvme0n1 : 1.02 10773.66 42.08 0.00 0.00 11867.59 6709.64 28461.35 00:21:57.701 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:57.701 nvme0n2 : 1.02 10756.22 42.02 0.00 0.00 11874.69 6772.05 28835.84 00:21:57.701 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:57.701 nvme0n3 : 1.02 10740.60 41.96 0.00 0.00 11879.61 6803.26 29335.16 00:21:57.701 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:57.701 nvme1n1 : 1.03 10724.83 41.89 0.00 0.00 11885.31 6803.26 29709.65 00:21:57.701 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:57.701 nvme2n1 : 1.03 14569.61 56.91 0.00 0.00 8738.29 3464.05 26713.72 00:21:57.701 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:57.701 nvme3n1 : 1.03 10693.76 41.77 0.00 0.00 11831.45 6241.52 27213.04 00:21:57.701 [2024-11-20T11:36:03.463Z] =================================================================================================================== 00:21:57.701 [2024-11-20T11:36:03.463Z] Total : 68258.69 266.64 0.00 0.00 11198.36 3464.05 29709.65 00:21:59.079 00:21:59.079 real 0m3.404s 00:21:59.079 user 0m2.537s 00:21:59.079 sys 0m0.676s 00:21:59.079 11:36:04 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:59.079 11:36:04 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:21:59.079 ************************************ 00:21:59.079 END 
TEST bdev_write_zeroes 00:21:59.079 ************************************ 00:21:59.079 11:36:04 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:59.079 11:36:04 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:21:59.079 11:36:04 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:59.079 11:36:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:59.079 ************************************ 00:21:59.079 START TEST bdev_json_nonenclosed 00:21:59.079 ************************************ 00:21:59.079 11:36:04 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:59.079 [2024-11-20 11:36:04.826888] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:21:59.079 [2024-11-20 11:36:04.827035] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75643 ] 00:21:59.338 [2024-11-20 11:36:05.003782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.596 [2024-11-20 11:36:05.140928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:59.596 [2024-11-20 11:36:05.141071] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:21:59.596 [2024-11-20 11:36:05.141096] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:21:59.596 [2024-11-20 11:36:05.141110] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:59.856 00:21:59.856 real 0m0.702s 00:21:59.856 user 0m0.456s 00:21:59.856 sys 0m0.140s 00:21:59.856 11:36:05 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:59.856 11:36:05 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:21:59.856 ************************************ 00:21:59.856 END TEST bdev_json_nonenclosed 00:21:59.856 ************************************ 00:21:59.856 11:36:05 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:59.856 11:36:05 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:21:59.856 11:36:05 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:59.856 11:36:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:59.856 ************************************ 00:21:59.856 START TEST bdev_json_nonarray 00:21:59.856 ************************************ 00:21:59.856 11:36:05 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:59.856 [2024-11-20 11:36:05.586539] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
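Both json_config checks here are negative tests: the nonenclosed run above was expected to fail with "not enclosed in {}", and the nonarray run starting below fails because 'subsystems' is not an array. Illustrative input shapes only, derived from those two error messages (the real nonenclosed.json and nonarray.json live under test/bdev/ and may differ):

    echo '{ "subsystems": [] }' > /tmp/valid.json        # a well-formed --json config
    echo '"subsystems": []'     > /tmp/nonenclosed.json  # rejected: not enclosed in {}
    echo '{ "subsystems": {} }' > /tmp/nonarray.json     # rejected: 'subsystems' should be an array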
00:21:59.856 [2024-11-20 11:36:05.586708] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75669 ] 00:22:00.116 [2024-11-20 11:36:05.765947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.375 [2024-11-20 11:36:05.899572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:00.375 [2024-11-20 11:36:05.899686] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:22:00.375 [2024-11-20 11:36:05.899711] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:22:00.375 [2024-11-20 11:36:05.899724] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:00.634 00:22:00.634 real 0m0.703s 00:22:00.634 user 0m0.454s 00:22:00.634 sys 0m0.142s 00:22:00.634 11:36:06 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:00.634 11:36:06 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:22:00.634 ************************************ 00:22:00.634 END TEST bdev_json_nonarray 00:22:00.634 ************************************ 00:22:00.634 11:36:06 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:22:00.634 11:36:06 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:22:00.634 11:36:06 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:22:00.634 11:36:06 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:22:00.634 11:36:06 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:22:00.634 11:36:06 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:22:00.634 11:36:06 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:22:00.634 11:36:06 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:22:00.634 11:36:06 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:22:00.634 11:36:06 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:22:00.634 11:36:06 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:22:00.634 11:36:06 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:01.202 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:01.769 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:22:01.769 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:22:02.027 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:22:02.027 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:22:02.027 00:22:02.027 real 1m2.063s 00:22:02.027 user 1m46.638s 00:22:02.027 sys 0m29.057s 00:22:02.027 11:36:07 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:02.027 11:36:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:02.027 ************************************ 00:22:02.027 END TEST blockdev_xnvme 00:22:02.027 ************************************ 00:22:02.027 11:36:07 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:22:02.027 11:36:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:02.027 11:36:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:02.027 11:36:07 -- 
common/autotest_common.sh@10 -- # set +x 00:22:02.027 ************************************ 00:22:02.027 START TEST ublk 00:22:02.027 ************************************ 00:22:02.027 11:36:07 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:22:02.286 * Looking for test storage... 00:22:02.286 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:22:02.286 11:36:07 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:02.286 11:36:07 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:22:02.286 11:36:07 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:02.286 11:36:07 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:02.286 11:36:07 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:02.286 11:36:07 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:02.286 11:36:07 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:02.286 11:36:07 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:22:02.286 11:36:07 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:22:02.286 11:36:07 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:22:02.286 11:36:07 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:22:02.286 11:36:07 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:22:02.286 11:36:07 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:22:02.286 11:36:07 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:22:02.286 11:36:07 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:02.286 11:36:07 ublk -- scripts/common.sh@344 -- # case "$op" in 00:22:02.286 11:36:07 ublk -- scripts/common.sh@345 -- # : 1 00:22:02.286 11:36:07 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:02.286 11:36:07 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:02.286 11:36:07 ublk -- scripts/common.sh@365 -- # decimal 1 00:22:02.286 11:36:07 ublk -- scripts/common.sh@353 -- # local d=1 00:22:02.286 11:36:07 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:02.286 11:36:07 ublk -- scripts/common.sh@355 -- # echo 1 00:22:02.286 11:36:07 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:22:02.286 11:36:07 ublk -- scripts/common.sh@366 -- # decimal 2 00:22:02.286 11:36:07 ublk -- scripts/common.sh@353 -- # local d=2 00:22:02.286 11:36:07 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:02.286 11:36:07 ublk -- scripts/common.sh@355 -- # echo 2 00:22:02.286 11:36:07 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:22:02.286 11:36:07 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:02.286 11:36:07 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:02.286 11:36:07 ublk -- scripts/common.sh@368 -- # return 0 00:22:02.286 11:36:07 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:02.286 11:36:07 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:02.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.286 --rc genhtml_branch_coverage=1 00:22:02.286 --rc genhtml_function_coverage=1 00:22:02.286 --rc genhtml_legend=1 00:22:02.286 --rc geninfo_all_blocks=1 00:22:02.286 --rc geninfo_unexecuted_blocks=1 00:22:02.286 00:22:02.286 ' 00:22:02.286 11:36:07 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:02.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.286 --rc genhtml_branch_coverage=1 00:22:02.286 --rc genhtml_function_coverage=1 00:22:02.286 --rc genhtml_legend=1 00:22:02.286 --rc geninfo_all_blocks=1 00:22:02.286 --rc geninfo_unexecuted_blocks=1 00:22:02.286 00:22:02.286 ' 00:22:02.286 11:36:07 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:02.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.286 --rc genhtml_branch_coverage=1 00:22:02.286 --rc genhtml_function_coverage=1 00:22:02.286 --rc genhtml_legend=1 00:22:02.286 --rc geninfo_all_blocks=1 00:22:02.286 --rc geninfo_unexecuted_blocks=1 00:22:02.286 00:22:02.286 ' 00:22:02.286 11:36:07 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:02.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.286 --rc genhtml_branch_coverage=1 00:22:02.286 --rc genhtml_function_coverage=1 00:22:02.286 --rc genhtml_legend=1 00:22:02.286 --rc geninfo_all_blocks=1 00:22:02.286 --rc geninfo_unexecuted_blocks=1 00:22:02.286 00:22:02.286 ' 00:22:02.286 11:36:07 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:22:02.286 11:36:07 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:22:02.286 11:36:07 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:22:02.286 11:36:07 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:22:02.286 11:36:07 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:22:02.286 11:36:07 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:22:02.286 11:36:07 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:22:02.286 11:36:07 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:22:02.286 11:36:07 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:22:02.286 11:36:07 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:22:02.287 11:36:07 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:22:02.287 11:36:07 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:22:02.287 11:36:07 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:22:02.287 11:36:07 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:22:02.287 11:36:07 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:22:02.287 11:36:07 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:22:02.287 11:36:07 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:22:02.287 11:36:07 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:22:02.287 11:36:07 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:22:02.287 11:36:07 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:22:02.287 11:36:07 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:02.287 11:36:07 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:02.287 11:36:07 ublk -- common/autotest_common.sh@10 -- # set +x 00:22:02.287 ************************************ 00:22:02.287 START TEST test_save_ublk_config 00:22:02.287 ************************************ 00:22:02.287 11:36:07 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:22:02.287 11:36:07 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:22:02.287 11:36:07 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=75957 00:22:02.287 11:36:07 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:22:02.287 11:36:07 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:22:02.287 11:36:07 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 75957 00:22:02.287 11:36:07 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75957 ']' 00:22:02.287 11:36:07 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:02.287 11:36:07 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:02.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:02.287 11:36:07 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:02.287 11:36:07 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:02.287 11:36:07 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:22:02.545 [2024-11-20 11:36:08.136722] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
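The test_save_ublk_config run that has just started exercises a save/restore round trip: bring up spdk_tgt with ublk debug logging, create a ublk target plus one malloc-backed ublk disk, capture the live configuration with save_config, kill the target, then relaunch it from that JSON. A hand-written approximation of the flow (the rpc.py spellings and file name are assumptions; the disk parameters are read off the saved config below, where num_blocks 8192 x block_size 4096 = 32 MiB):

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_tgt" -L ublk & tgt=$!
    "$SPDK/scripts/rpc.py" ublk_create_target
    "$SPDK/scripts/rpc.py" bdev_malloc_create -b malloc0 32 4096      # 32 MiB, 4 KiB blocks
    "$SPDK/scripts/rpc.py" ublk_start_disk malloc0 0 -q 1 -d 128      # params match the JSON below
    "$SPDK/scripts/rpc.py" save_config > /tmp/ublk_config.json        # hypothetical file name
    kill $tgt; wait $tgt
    "$SPDK/build/bin/spdk_tgt" -L ublk -c /tmp/ublk_config.json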
00:22:02.545 [2024-11-20 11:36:08.137577] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75957 ] 00:22:02.804 [2024-11-20 11:36:08.343134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.804 [2024-11-20 11:36:08.507400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:04.176 11:36:09 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:04.176 11:36:09 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:22:04.176 11:36:09 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:22:04.176 11:36:09 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:22:04.176 11:36:09 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.176 11:36:09 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:22:04.176 [2024-11-20 11:36:09.510510] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:22:04.176 [2024-11-20 11:36:09.511932] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:22:04.176 malloc0 00:22:04.176 [2024-11-20 11:36:09.604009] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:22:04.176 [2024-11-20 11:36:09.604135] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:22:04.176 [2024-11-20 11:36:09.604151] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:22:04.176 [2024-11-20 11:36:09.604162] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:22:04.176 [2024-11-20 11:36:09.611517] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:04.176 [2024-11-20 11:36:09.611554] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:04.176 [2024-11-20 11:36:09.619524] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:04.176 [2024-11-20 11:36:09.619665] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:22:04.176 [2024-11-20 11:36:09.643526] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:22:04.176 0 00:22:04.176 11:36:09 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.176 11:36:09 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:22:04.176 11:36:09 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.176 11:36:09 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:22:04.435 11:36:09 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.435 11:36:09 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:22:04.435 "subsystems": [ 00:22:04.435 { 00:22:04.435 "subsystem": "fsdev", 00:22:04.435 "config": [ 00:22:04.435 { 00:22:04.435 "method": "fsdev_set_opts", 00:22:04.435 "params": { 00:22:04.435 "fsdev_io_pool_size": 65535, 00:22:04.435 "fsdev_io_cache_size": 256 00:22:04.435 } 00:22:04.435 } 00:22:04.435 ] 00:22:04.435 }, 00:22:04.435 { 00:22:04.435 "subsystem": "keyring", 00:22:04.435 "config": [] 00:22:04.435 }, 00:22:04.435 { 00:22:04.435 "subsystem": "iobuf", 00:22:04.435 "config": [ 00:22:04.435 { 
00:22:04.435 "method": "iobuf_set_options", 00:22:04.435 "params": { 00:22:04.435 "small_pool_count": 8192, 00:22:04.435 "large_pool_count": 1024, 00:22:04.435 "small_bufsize": 8192, 00:22:04.435 "large_bufsize": 135168, 00:22:04.435 "enable_numa": false 00:22:04.435 } 00:22:04.435 } 00:22:04.435 ] 00:22:04.435 }, 00:22:04.435 { 00:22:04.435 "subsystem": "sock", 00:22:04.435 "config": [ 00:22:04.435 { 00:22:04.435 "method": "sock_set_default_impl", 00:22:04.435 "params": { 00:22:04.435 "impl_name": "posix" 00:22:04.435 } 00:22:04.435 }, 00:22:04.435 { 00:22:04.435 "method": "sock_impl_set_options", 00:22:04.435 "params": { 00:22:04.435 "impl_name": "ssl", 00:22:04.435 "recv_buf_size": 4096, 00:22:04.435 "send_buf_size": 4096, 00:22:04.435 "enable_recv_pipe": true, 00:22:04.435 "enable_quickack": false, 00:22:04.435 "enable_placement_id": 0, 00:22:04.435 "enable_zerocopy_send_server": true, 00:22:04.435 "enable_zerocopy_send_client": false, 00:22:04.435 "zerocopy_threshold": 0, 00:22:04.435 "tls_version": 0, 00:22:04.435 "enable_ktls": false 00:22:04.435 } 00:22:04.435 }, 00:22:04.435 { 00:22:04.435 "method": "sock_impl_set_options", 00:22:04.435 "params": { 00:22:04.435 "impl_name": "posix", 00:22:04.435 "recv_buf_size": 2097152, 00:22:04.435 "send_buf_size": 2097152, 00:22:04.435 "enable_recv_pipe": true, 00:22:04.435 "enable_quickack": false, 00:22:04.435 "enable_placement_id": 0, 00:22:04.435 "enable_zerocopy_send_server": true, 00:22:04.435 "enable_zerocopy_send_client": false, 00:22:04.435 "zerocopy_threshold": 0, 00:22:04.435 "tls_version": 0, 00:22:04.435 "enable_ktls": false 00:22:04.435 } 00:22:04.435 } 00:22:04.435 ] 00:22:04.435 }, 00:22:04.435 { 00:22:04.435 "subsystem": "vmd", 00:22:04.435 "config": [] 00:22:04.435 }, 00:22:04.435 { 00:22:04.435 "subsystem": "accel", 00:22:04.435 "config": [ 00:22:04.435 { 00:22:04.435 "method": "accel_set_options", 00:22:04.435 "params": { 00:22:04.435 "small_cache_size": 128, 00:22:04.435 "large_cache_size": 16, 00:22:04.435 "task_count": 2048, 00:22:04.435 "sequence_count": 2048, 00:22:04.435 "buf_count": 2048 00:22:04.435 } 00:22:04.435 } 00:22:04.435 ] 00:22:04.435 }, 00:22:04.435 { 00:22:04.435 "subsystem": "bdev", 00:22:04.435 "config": [ 00:22:04.435 { 00:22:04.435 "method": "bdev_set_options", 00:22:04.435 "params": { 00:22:04.435 "bdev_io_pool_size": 65535, 00:22:04.435 "bdev_io_cache_size": 256, 00:22:04.435 "bdev_auto_examine": true, 00:22:04.435 "iobuf_small_cache_size": 128, 00:22:04.435 "iobuf_large_cache_size": 16 00:22:04.435 } 00:22:04.435 }, 00:22:04.435 { 00:22:04.435 "method": "bdev_raid_set_options", 00:22:04.435 "params": { 00:22:04.435 "process_window_size_kb": 1024, 00:22:04.435 "process_max_bandwidth_mb_sec": 0 00:22:04.435 } 00:22:04.435 }, 00:22:04.435 { 00:22:04.435 "method": "bdev_iscsi_set_options", 00:22:04.435 "params": { 00:22:04.435 "timeout_sec": 30 00:22:04.435 } 00:22:04.435 }, 00:22:04.435 { 00:22:04.435 "method": "bdev_nvme_set_options", 00:22:04.435 "params": { 00:22:04.435 "action_on_timeout": "none", 00:22:04.435 "timeout_us": 0, 00:22:04.435 "timeout_admin_us": 0, 00:22:04.435 "keep_alive_timeout_ms": 10000, 00:22:04.435 "arbitration_burst": 0, 00:22:04.435 "low_priority_weight": 0, 00:22:04.435 "medium_priority_weight": 0, 00:22:04.435 "high_priority_weight": 0, 00:22:04.435 "nvme_adminq_poll_period_us": 10000, 00:22:04.435 "nvme_ioq_poll_period_us": 0, 00:22:04.435 "io_queue_requests": 0, 00:22:04.435 "delay_cmd_submit": true, 00:22:04.435 "transport_retry_count": 4, 00:22:04.435 
"bdev_retry_count": 3, 00:22:04.435 "transport_ack_timeout": 0, 00:22:04.435 "ctrlr_loss_timeout_sec": 0, 00:22:04.435 "reconnect_delay_sec": 0, 00:22:04.435 "fast_io_fail_timeout_sec": 0, 00:22:04.435 "disable_auto_failback": false, 00:22:04.435 "generate_uuids": false, 00:22:04.435 "transport_tos": 0, 00:22:04.435 "nvme_error_stat": false, 00:22:04.435 "rdma_srq_size": 0, 00:22:04.435 "io_path_stat": false, 00:22:04.435 "allow_accel_sequence": false, 00:22:04.435 "rdma_max_cq_size": 0, 00:22:04.435 "rdma_cm_event_timeout_ms": 0, 00:22:04.435 "dhchap_digests": [ 00:22:04.435 "sha256", 00:22:04.435 "sha384", 00:22:04.435 "sha512" 00:22:04.435 ], 00:22:04.435 "dhchap_dhgroups": [ 00:22:04.435 "null", 00:22:04.435 "ffdhe2048", 00:22:04.435 "ffdhe3072", 00:22:04.435 "ffdhe4096", 00:22:04.435 "ffdhe6144", 00:22:04.435 "ffdhe8192" 00:22:04.435 ] 00:22:04.435 } 00:22:04.435 }, 00:22:04.435 { 00:22:04.435 "method": "bdev_nvme_set_hotplug", 00:22:04.435 "params": { 00:22:04.435 "period_us": 100000, 00:22:04.435 "enable": false 00:22:04.435 } 00:22:04.435 }, 00:22:04.435 { 00:22:04.435 "method": "bdev_malloc_create", 00:22:04.435 "params": { 00:22:04.435 "name": "malloc0", 00:22:04.435 "num_blocks": 8192, 00:22:04.435 "block_size": 4096, 00:22:04.435 "physical_block_size": 4096, 00:22:04.435 "uuid": "3149e47b-c743-4674-931d-aa50b80cf1eb", 00:22:04.436 "optimal_io_boundary": 0, 00:22:04.436 "md_size": 0, 00:22:04.436 "dif_type": 0, 00:22:04.436 "dif_is_head_of_md": false, 00:22:04.436 "dif_pi_format": 0 00:22:04.436 } 00:22:04.436 }, 00:22:04.436 { 00:22:04.436 "method": "bdev_wait_for_examine" 00:22:04.436 } 00:22:04.436 ] 00:22:04.436 }, 00:22:04.436 { 00:22:04.436 "subsystem": "scsi", 00:22:04.436 "config": null 00:22:04.436 }, 00:22:04.436 { 00:22:04.436 "subsystem": "scheduler", 00:22:04.436 "config": [ 00:22:04.436 { 00:22:04.436 "method": "framework_set_scheduler", 00:22:04.436 "params": { 00:22:04.436 "name": "static" 00:22:04.436 } 00:22:04.436 } 00:22:04.436 ] 00:22:04.436 }, 00:22:04.436 { 00:22:04.436 "subsystem": "vhost_scsi", 00:22:04.436 "config": [] 00:22:04.436 }, 00:22:04.436 { 00:22:04.436 "subsystem": "vhost_blk", 00:22:04.436 "config": [] 00:22:04.436 }, 00:22:04.436 { 00:22:04.436 "subsystem": "ublk", 00:22:04.436 "config": [ 00:22:04.436 { 00:22:04.436 "method": "ublk_create_target", 00:22:04.436 "params": { 00:22:04.436 "cpumask": "1" 00:22:04.436 } 00:22:04.436 }, 00:22:04.436 { 00:22:04.436 "method": "ublk_start_disk", 00:22:04.436 "params": { 00:22:04.436 "bdev_name": "malloc0", 00:22:04.436 "ublk_id": 0, 00:22:04.436 "num_queues": 1, 00:22:04.436 "queue_depth": 128 00:22:04.436 } 00:22:04.436 } 00:22:04.436 ] 00:22:04.436 }, 00:22:04.436 { 00:22:04.436 "subsystem": "nbd", 00:22:04.436 "config": [] 00:22:04.436 }, 00:22:04.436 { 00:22:04.436 "subsystem": "nvmf", 00:22:04.436 "config": [ 00:22:04.436 { 00:22:04.436 "method": "nvmf_set_config", 00:22:04.436 "params": { 00:22:04.436 "discovery_filter": "match_any", 00:22:04.436 "admin_cmd_passthru": { 00:22:04.436 "identify_ctrlr": false 00:22:04.436 }, 00:22:04.436 "dhchap_digests": [ 00:22:04.436 "sha256", 00:22:04.436 "sha384", 00:22:04.436 "sha512" 00:22:04.436 ], 00:22:04.436 "dhchap_dhgroups": [ 00:22:04.436 "null", 00:22:04.436 "ffdhe2048", 00:22:04.436 "ffdhe3072", 00:22:04.436 "ffdhe4096", 00:22:04.436 "ffdhe6144", 00:22:04.436 "ffdhe8192" 00:22:04.436 ] 00:22:04.436 } 00:22:04.436 }, 00:22:04.436 { 00:22:04.436 "method": "nvmf_set_max_subsystems", 00:22:04.436 "params": { 00:22:04.436 "max_subsystems": 1024 
00:22:04.436 } 00:22:04.436 }, 00:22:04.436 { 00:22:04.436 "method": "nvmf_set_crdt", 00:22:04.436 "params": { 00:22:04.436 "crdt1": 0, 00:22:04.436 "crdt2": 0, 00:22:04.436 "crdt3": 0 00:22:04.436 } 00:22:04.436 } 00:22:04.436 ] 00:22:04.436 }, 00:22:04.436 { 00:22:04.436 "subsystem": "iscsi", 00:22:04.436 "config": [ 00:22:04.436 { 00:22:04.436 "method": "iscsi_set_options", 00:22:04.436 "params": { 00:22:04.436 "node_base": "iqn.2016-06.io.spdk", 00:22:04.436 "max_sessions": 128, 00:22:04.436 "max_connections_per_session": 2, 00:22:04.436 "max_queue_depth": 64, 00:22:04.436 "default_time2wait": 2, 00:22:04.436 "default_time2retain": 20, 00:22:04.436 "first_burst_length": 8192, 00:22:04.436 "immediate_data": true, 00:22:04.436 "allow_duplicated_isid": false, 00:22:04.436 "error_recovery_level": 0, 00:22:04.436 "nop_timeout": 60, 00:22:04.436 "nop_in_interval": 30, 00:22:04.436 "disable_chap": false, 00:22:04.436 "require_chap": false, 00:22:04.436 "mutual_chap": false, 00:22:04.436 "chap_group": 0, 00:22:04.436 "max_large_datain_per_connection": 64, 00:22:04.436 "max_r2t_per_connection": 4, 00:22:04.436 "pdu_pool_size": 36864, 00:22:04.436 "immediate_data_pool_size": 16384, 00:22:04.436 "data_out_pool_size": 2048 00:22:04.436 } 00:22:04.436 } 00:22:04.436 ] 00:22:04.436 } 00:22:04.436 ] 00:22:04.436 }' 00:22:04.436 11:36:09 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 75957 00:22:04.436 11:36:09 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75957 ']' 00:22:04.436 11:36:09 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75957 00:22:04.436 11:36:09 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:22:04.436 11:36:09 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:04.436 11:36:09 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75957 00:22:04.436 killing process with pid 75957 00:22:04.436 11:36:09 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:04.436 11:36:09 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:04.436 11:36:09 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75957' 00:22:04.436 11:36:09 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75957 00:22:04.436 11:36:09 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75957 00:22:06.337 [2024-11-20 11:36:11.689852] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:22:06.337 [2024-11-20 11:36:11.725574] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:22:06.337 [2024-11-20 11:36:11.725750] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:22:06.337 [2024-11-20 11:36:11.734542] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:22:06.337 [2024-11-20 11:36:11.734626] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:22:06.337 [2024-11-20 11:36:11.734652] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:22:06.337 [2024-11-20 11:36:11.734693] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:22:06.337 [2024-11-20 11:36:11.734884] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:22:08.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
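The relaunch below passes "-c /dev/fd/63". That path is simply what bash process substitution looks like from inside the child process: the test feeds the JSON captured above straight back into the new target without writing a temporary file. Equivalent hand-written form (variable name hypothetical):

    SPDK=/home/vagrant/spdk_repo/spdk
    config=$("$SPDK/scripts/rpc.py" save_config)              # captured before the kill
    "$SPDK/build/bin/spdk_tgt" -L ublk -c <(echo "$config")   # the child sees /dev/fd/63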
00:22:08.262 11:36:13 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=76030 00:22:08.262 11:36:13 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 76030 00:22:08.262 11:36:13 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 76030 ']' 00:22:08.262 11:36:13 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:08.262 11:36:13 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:08.262 11:36:13 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:08.262 11:36:13 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:08.262 11:36:13 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:22:08.262 11:36:13 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:22:08.262 11:36:13 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:22:08.262 "subsystems": [ 00:22:08.262 { 00:22:08.262 "subsystem": "fsdev", 00:22:08.262 "config": [ 00:22:08.262 { 00:22:08.262 "method": "fsdev_set_opts", 00:22:08.262 "params": { 00:22:08.262 "fsdev_io_pool_size": 65535, 00:22:08.262 "fsdev_io_cache_size": 256 00:22:08.262 } 00:22:08.262 } 00:22:08.262 ] 00:22:08.262 }, 00:22:08.262 { 00:22:08.262 "subsystem": "keyring", 00:22:08.262 "config": [] 00:22:08.262 }, 00:22:08.262 { 00:22:08.262 "subsystem": "iobuf", 00:22:08.262 "config": [ 00:22:08.262 { 00:22:08.262 "method": "iobuf_set_options", 00:22:08.262 "params": { 00:22:08.262 "small_pool_count": 8192, 00:22:08.262 "large_pool_count": 1024, 00:22:08.262 "small_bufsize": 8192, 00:22:08.262 "large_bufsize": 135168, 00:22:08.262 "enable_numa": false 00:22:08.262 } 00:22:08.262 } 00:22:08.262 ] 00:22:08.262 }, 00:22:08.262 { 00:22:08.262 "subsystem": "sock", 00:22:08.262 "config": [ 00:22:08.262 { 00:22:08.262 "method": "sock_set_default_impl", 00:22:08.262 "params": { 00:22:08.262 "impl_name": "posix" 00:22:08.262 } 00:22:08.262 }, 00:22:08.262 { 00:22:08.262 "method": "sock_impl_set_options", 00:22:08.262 "params": { 00:22:08.262 "impl_name": "ssl", 00:22:08.262 "recv_buf_size": 4096, 00:22:08.262 "send_buf_size": 4096, 00:22:08.262 "enable_recv_pipe": true, 00:22:08.262 "enable_quickack": false, 00:22:08.262 "enable_placement_id": 0, 00:22:08.262 "enable_zerocopy_send_server": true, 00:22:08.262 "enable_zerocopy_send_client": false, 00:22:08.262 "zerocopy_threshold": 0, 00:22:08.262 "tls_version": 0, 00:22:08.262 "enable_ktls": false 00:22:08.262 } 00:22:08.262 }, 00:22:08.262 { 00:22:08.262 "method": "sock_impl_set_options", 00:22:08.262 "params": { 00:22:08.262 "impl_name": "posix", 00:22:08.262 "recv_buf_size": 2097152, 00:22:08.262 "send_buf_size": 2097152, 00:22:08.262 "enable_recv_pipe": true, 00:22:08.262 "enable_quickack": false, 00:22:08.262 "enable_placement_id": 0, 00:22:08.262 "enable_zerocopy_send_server": true, 00:22:08.262 "enable_zerocopy_send_client": false, 00:22:08.262 "zerocopy_threshold": 0, 00:22:08.262 "tls_version": 0, 00:22:08.262 "enable_ktls": false 00:22:08.262 } 00:22:08.262 } 00:22:08.262 ] 00:22:08.262 }, 00:22:08.262 { 00:22:08.262 "subsystem": "vmd", 00:22:08.262 "config": [] 00:22:08.262 }, 00:22:08.262 { 00:22:08.262 "subsystem": "accel", 00:22:08.262 "config": [ 00:22:08.262 { 00:22:08.262 "method": "accel_set_options", 00:22:08.262 "params": { 
00:22:08.262 "small_cache_size": 128, 00:22:08.262 "large_cache_size": 16, 00:22:08.262 "task_count": 2048, 00:22:08.262 "sequence_count": 2048, 00:22:08.262 "buf_count": 2048 00:22:08.262 } 00:22:08.262 } 00:22:08.262 ] 00:22:08.262 }, 00:22:08.262 { 00:22:08.262 "subsystem": "bdev", 00:22:08.262 "config": [ 00:22:08.262 { 00:22:08.262 "method": "bdev_set_options", 00:22:08.262 "params": { 00:22:08.262 "bdev_io_pool_size": 65535, 00:22:08.262 "bdev_io_cache_size": 256, 00:22:08.262 "bdev_auto_examine": true, 00:22:08.262 "iobuf_small_cache_size": 128, 00:22:08.262 "iobuf_large_cache_size": 16 00:22:08.262 } 00:22:08.262 }, 00:22:08.262 { 00:22:08.262 "method": "bdev_raid_set_options", 00:22:08.262 "params": { 00:22:08.262 "process_window_size_kb": 1024, 00:22:08.262 "process_max_bandwidth_mb_sec": 0 00:22:08.262 } 00:22:08.262 }, 00:22:08.262 { 00:22:08.262 "method": "bdev_iscsi_set_options", 00:22:08.262 "params": { 00:22:08.262 "timeout_sec": 30 00:22:08.262 } 00:22:08.262 }, 00:22:08.262 { 00:22:08.263 "method": "bdev_nvme_set_options", 00:22:08.263 "params": { 00:22:08.263 "action_on_timeout": "none", 00:22:08.263 "timeout_us": 0, 00:22:08.263 "timeout_admin_us": 0, 00:22:08.263 "keep_alive_timeout_ms": 10000, 00:22:08.263 "arbitration_burst": 0, 00:22:08.263 "low_priority_weight": 0, 00:22:08.263 "medium_priority_weight": 0, 00:22:08.263 "high_priority_weight": 0, 00:22:08.263 "nvme_adminq_poll_period_us": 10000, 00:22:08.263 "nvme_ioq_poll_period_us": 0, 00:22:08.263 "io_queue_requests": 0, 00:22:08.263 "delay_cmd_submit": true, 00:22:08.263 "transport_retry_count": 4, 00:22:08.263 "bdev_retry_count": 3, 00:22:08.263 "transport_ack_timeout": 0, 00:22:08.263 "ctrlr_loss_timeout_sec": 0, 00:22:08.263 "reconnect_delay_sec": 0, 00:22:08.263 "fast_io_fail_timeout_sec": 0, 00:22:08.263 "disable_auto_failback": false, 00:22:08.263 "generate_uuids": false, 00:22:08.263 "transport_tos": 0, 00:22:08.263 "nvme_error_stat": false, 00:22:08.263 "rdma_srq_size": 0, 00:22:08.263 "io_path_stat": false, 00:22:08.263 "allow_accel_sequence": false, 00:22:08.263 "rdma_max_cq_size": 0, 00:22:08.263 "rdma_cm_event_timeout_ms": 0, 00:22:08.263 "dhchap_digests": [ 00:22:08.263 "sha256", 00:22:08.263 "sha384", 00:22:08.263 "sha512" 00:22:08.263 ], 00:22:08.263 "dhchap_dhgroups": [ 00:22:08.263 "null", 00:22:08.263 "ffdhe2048", 00:22:08.263 "ffdhe3072", 00:22:08.263 "ffdhe4096", 00:22:08.263 "ffdhe6144", 00:22:08.263 "ffdhe8192" 00:22:08.263 ] 00:22:08.263 } 00:22:08.263 }, 00:22:08.263 { 00:22:08.263 "method": "bdev_nvme_set_hotplug", 00:22:08.263 "params": { 00:22:08.263 "period_us": 100000, 00:22:08.263 "enable": false 00:22:08.263 } 00:22:08.263 }, 00:22:08.263 { 00:22:08.263 "method": "bdev_malloc_create", 00:22:08.263 "params": { 00:22:08.263 "name": "malloc0", 00:22:08.263 "num_blocks": 8192, 00:22:08.263 "block_size": 4096, 00:22:08.263 "physical_block_size": 4096, 00:22:08.263 "uuid": "3149e47b-c743-4674-931d-aa50b80cf1eb", 00:22:08.263 "optimal_io_boundary": 0, 00:22:08.263 "md_size": 0, 00:22:08.263 "dif_type": 0, 00:22:08.263 "dif_is_head_of_md": false, 00:22:08.263 "dif_pi_format": 0 00:22:08.263 } 00:22:08.263 }, 00:22:08.263 { 00:22:08.263 "method": "bdev_wait_for_examine" 00:22:08.263 } 00:22:08.263 ] 00:22:08.263 }, 00:22:08.263 { 00:22:08.263 "subsystem": "scsi", 00:22:08.263 "config": null 00:22:08.263 }, 00:22:08.263 { 00:22:08.263 "subsystem": "scheduler", 00:22:08.263 "config": [ 00:22:08.263 { 00:22:08.263 "method": "framework_set_scheduler", 00:22:08.263 "params": { 00:22:08.263 
"name": "static" 00:22:08.263 } 00:22:08.263 } 00:22:08.263 ] 00:22:08.263 }, 00:22:08.263 { 00:22:08.263 "subsystem": "vhost_scsi", 00:22:08.263 "config": [] 00:22:08.263 }, 00:22:08.263 { 00:22:08.263 "subsystem": "vhost_blk", 00:22:08.263 "config": [] 00:22:08.263 }, 00:22:08.263 { 00:22:08.263 "subsystem": "ublk", 00:22:08.263 "config": [ 00:22:08.263 { 00:22:08.263 "method": "ublk_create_target", 00:22:08.263 "params": { 00:22:08.263 "cpumask": "1" 00:22:08.263 } 00:22:08.263 }, 00:22:08.263 { 00:22:08.263 "method": "ublk_start_disk", 00:22:08.263 "params": { 00:22:08.263 "bdev_name": "malloc0", 00:22:08.263 "ublk_id": 0, 00:22:08.263 "num_queues": 1, 00:22:08.263 "queue_depth": 128 00:22:08.263 } 00:22:08.263 } 00:22:08.263 ] 00:22:08.263 }, 00:22:08.263 { 00:22:08.263 "subsystem": "nbd", 00:22:08.263 "config": [] 00:22:08.263 }, 00:22:08.263 { 00:22:08.263 "subsystem": "nvmf", 00:22:08.263 "config": [ 00:22:08.263 { 00:22:08.263 "method": "nvmf_set_config", 00:22:08.263 "params": { 00:22:08.263 "discovery_filter": "match_any", 00:22:08.263 "admin_cmd_passthru": { 00:22:08.263 "identify_ctrlr": false 00:22:08.263 }, 00:22:08.263 "dhchap_digests": [ 00:22:08.263 "sha256", 00:22:08.263 "sha384", 00:22:08.263 "sha512" 00:22:08.263 ], 00:22:08.263 "dhchap_dhgroups": [ 00:22:08.263 "null", 00:22:08.263 "ffdhe2048", 00:22:08.263 "ffdhe3072", 00:22:08.263 "ffdhe4096", 00:22:08.263 "ffdhe6144", 00:22:08.263 "ffdhe8192" 00:22:08.263 ] 00:22:08.263 } 00:22:08.263 }, 00:22:08.263 { 00:22:08.263 "method": "nvmf_set_max_subsystems", 00:22:08.263 "params": { 00:22:08.263 "max_subsystems": 1024 00:22:08.263 } 00:22:08.263 }, 00:22:08.263 { 00:22:08.263 "method": "nvmf_set_crdt", 00:22:08.263 "params": { 00:22:08.263 "crdt1": 0, 00:22:08.263 "crdt2": 0, 00:22:08.263 "crdt3": 0 00:22:08.263 } 00:22:08.263 } 00:22:08.263 ] 00:22:08.263 }, 00:22:08.263 { 00:22:08.263 "subsystem": "iscsi", 00:22:08.263 "config": [ 00:22:08.263 { 00:22:08.263 "method": "iscsi_set_options", 00:22:08.263 "params": { 00:22:08.263 "node_base": "iqn.2016-06.io.spdk", 00:22:08.263 "max_sessions": 128, 00:22:08.263 "max_connections_per_session": 2, 00:22:08.263 "max_queue_depth": 64, 00:22:08.263 "default_time2wait": 2, 00:22:08.263 "default_time2retain": 20, 00:22:08.263 "first_burst_length": 8192, 00:22:08.263 "immediate_data": true, 00:22:08.263 "allow_duplicated_isid": false, 00:22:08.263 "error_recovery_level": 0, 00:22:08.263 "nop_timeout": 60, 00:22:08.263 "nop_in_interval": 30, 00:22:08.263 "disable_chap": false, 00:22:08.263 "require_chap": false, 00:22:08.263 "mutual_chap": false, 00:22:08.263 "chap_group": 0, 00:22:08.263 "max_large_datain_per_connection": 64, 00:22:08.263 "max_r2t_per_connection": 4, 00:22:08.263 "pdu_pool_size": 36864, 00:22:08.263 "immediate_data_pool_size": 16384, 00:22:08.263 "data_out_pool_size": 2048 00:22:08.263 } 00:22:08.263 } 00:22:08.263 ] 00:22:08.263 } 00:22:08.263 ] 00:22:08.263 }' 00:22:08.263 [2024-11-20 11:36:13.978628] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:22:08.263 [2024-11-20 11:36:13.978776] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76030 ] 00:22:08.538 [2024-11-20 11:36:14.166177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.796 [2024-11-20 11:36:14.308071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:10.172 [2024-11-20 11:36:15.510493] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:22:10.172 [2024-11-20 11:36:15.512040] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:22:10.172 [2024-11-20 11:36:15.518681] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:22:10.172 [2024-11-20 11:36:15.518800] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:22:10.172 [2024-11-20 11:36:15.518816] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:22:10.172 [2024-11-20 11:36:15.518826] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:22:10.172 [2024-11-20 11:36:15.527577] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:10.172 [2024-11-20 11:36:15.527610] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:10.172 [2024-11-20 11:36:15.534523] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:10.172 [2024-11-20 11:36:15.534656] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:22:10.172 [2024-11-20 11:36:15.551510] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:22:10.172 11:36:15 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:10.172 11:36:15 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:22:10.172 11:36:15 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:22:10.172 11:36:15 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:22:10.172 11:36:15 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.172 11:36:15 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:22:10.172 11:36:15 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.172 11:36:15 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:22:10.172 11:36:15 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:22:10.172 11:36:15 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 76030 00:22:10.172 11:36:15 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 76030 ']' 00:22:10.172 11:36:15 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 76030 00:22:10.172 11:36:15 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:22:10.172 11:36:15 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:10.172 11:36:15 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76030 00:22:10.172 11:36:15 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:10.172 11:36:15 ublk.test_save_ublk_config -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:10.172 11:36:15 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76030' 00:22:10.172 killing process with pid 76030 00:22:10.172 11:36:15 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 76030 00:22:10.172 11:36:15 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 76030 00:22:12.072 [2024-11-20 11:36:17.476538] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:22:12.072 [2024-11-20 11:36:17.510658] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:22:12.072 [2024-11-20 11:36:17.510828] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:22:12.072 [2024-11-20 11:36:17.520534] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:22:12.072 [2024-11-20 11:36:17.520616] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:22:12.072 [2024-11-20 11:36:17.520628] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:22:12.072 [2024-11-20 11:36:17.520659] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:22:12.072 [2024-11-20 11:36:17.520861] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:22:14.028 11:36:19 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:22:14.028 00:22:14.028 real 0m11.603s 00:22:14.028 user 0m9.558s 00:22:14.028 sys 0m3.197s 00:22:14.028 11:36:19 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:14.028 ************************************ 00:22:14.028 END TEST test_save_ublk_config 00:22:14.028 11:36:19 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:22:14.028 ************************************ 00:22:14.028 11:36:19 ublk -- ublk/ublk.sh@139 -- # spdk_pid=76127 00:22:14.028 11:36:19 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:14.028 11:36:19 ublk -- ublk/ublk.sh@141 -- # waitforlisten 76127 00:22:14.028 11:36:19 ublk -- common/autotest_common.sh@835 -- # '[' -z 76127 ']' 00:22:14.028 11:36:19 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:22:14.028 11:36:19 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:14.028 11:36:19 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:14.028 11:36:19 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:14.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:14.028 11:36:19 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:14.028 11:36:19 ublk -- common/autotest_common.sh@10 -- # set +x 00:22:14.028 [2024-11-20 11:36:19.748701] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
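Unlike the earlier single-core targets (core mask 0x1 in their EAL parameter lines), this third spdk_tgt is launched with -m 0x3: a hex mask with bits 0 and 1 set, which is why the EAL line below reports two cores available and two reactors start. For comparison:

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_tgt" -m 0x1 -L ublk   # bit 0 set: reactor on core 0 only
    "$SPDK/build/bin/spdk_tgt" -m 0x3 -L ublk   # bits 0 and 1 set: reactors on cores 0 and 1, as here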
00:22:14.028 [2024-11-20 11:36:19.748864] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76127 ] 00:22:14.286 [2024-11-20 11:36:19.940161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:14.545 [2024-11-20 11:36:20.136625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:14.545 [2024-11-20 11:36:20.136657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:15.499 11:36:21 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:15.499 11:36:21 ublk -- common/autotest_common.sh@868 -- # return 0 00:22:15.499 11:36:21 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:22:15.499 11:36:21 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:15.499 11:36:21 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:15.499 11:36:21 ublk -- common/autotest_common.sh@10 -- # set +x 00:22:15.499 ************************************ 00:22:15.499 START TEST test_create_ublk 00:22:15.499 ************************************ 00:22:15.499 11:36:21 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:22:15.499 11:36:21 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:22:15.499 11:36:21 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.499 11:36:21 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:15.499 [2024-11-20 11:36:21.220509] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:22:15.499 [2024-11-20 11:36:21.224350] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:22:15.499 11:36:21 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.499 11:36:21 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:22:15.499 11:36:21 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:22:15.499 11:36:21 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.499 11:36:21 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:16.067 11:36:21 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.067 11:36:21 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:22:16.067 11:36:21 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:22:16.067 11:36:21 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.067 11:36:21 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:16.067 [2024-11-20 11:36:21.576747] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:22:16.067 [2024-11-20 11:36:21.577322] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:22:16.067 [2024-11-20 11:36:21.577351] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:22:16.067 [2024-11-20 11:36:21.577362] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:22:16.067 [2024-11-20 11:36:21.585024] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:16.067 [2024-11-20 11:36:21.585086] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:16.067 
[2024-11-20 11:36:21.592533] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:16.067 [2024-11-20 11:36:21.602611] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:22:16.067 [2024-11-20 11:36:21.627555] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:22:16.067 11:36:21 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.067 11:36:21 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:22:16.067 11:36:21 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:22:16.067 11:36:21 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:22:16.067 11:36:21 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.067 11:36:21 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:16.067 11:36:21 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.067 11:36:21 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:22:16.067 { 00:22:16.067 "ublk_device": "/dev/ublkb0", 00:22:16.067 "id": 0, 00:22:16.067 "queue_depth": 512, 00:22:16.067 "num_queues": 4, 00:22:16.067 "bdev_name": "Malloc0" 00:22:16.067 } 00:22:16.067 ]' 00:22:16.067 11:36:21 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:22:16.067 11:36:21 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:22:16.067 11:36:21 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:22:16.067 11:36:21 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:22:16.067 11:36:21 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:22:16.067 11:36:21 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:22:16.067 11:36:21 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:22:16.067 11:36:21 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:22:16.067 11:36:21 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:22:16.326 11:36:21 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:22:16.326 11:36:21 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:22:16.326 11:36:21 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:22:16.326 11:36:21 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:22:16.326 11:36:21 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:22:16.326 11:36:21 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:22:16.326 11:36:21 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:22:16.326 11:36:21 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:22:16.326 11:36:21 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:22:16.326 11:36:21 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:22:16.326 11:36:21 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:22:16.326 11:36:21 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
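[Editor's annotation] The device fio is about to exercise was assembled by the three RPCs traced above: ublk_create_target, bdev_malloc_create (a 128 MiB RAM-backed bdev with 4096-byte blocks, auto-named Malloc0), and ublk_start_disk with 4 queues of depth 512, which surfaces the bdev to the kernel as /dev/ublkb0. Condensed into a standalone sketch (the rpc.py location and the default /var/tmp/spdk.sock socket are assumptions carried over from this environment):

    RPC() { "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }
    RPC ublk_create_target                       # start the ublk target inside spdk_tgt
    RPC bdev_malloc_create 128 4096              # 128 MiB bdev, 4 KiB blocks, returns "Malloc0"
    RPC ublk_start_disk Malloc0 0 -q 4 -d 512    # kernel device /dev/ublkb0, 4 queues, depth 512
    RPC ublk_get_disks -n 0                      # JSON record for device 0, as verified via jq above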
00:22:16.326 11:36:21 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:22:16.326 fio: verification read phase will never start because write phase uses all of runtime 00:22:16.326 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:22:16.326 fio-3.35 00:22:16.326 Starting 1 process 00:22:28.527 00:22:28.527 fio_test: (groupid=0, jobs=1): err= 0: pid=76179: Wed Nov 20 11:36:32 2024 00:22:28.527 write: IOPS=13.3k, BW=52.0MiB/s (54.5MB/s)(520MiB/10001msec); 0 zone resets 00:22:28.527 clat (usec): min=44, max=4023, avg=74.03, stdev=112.35 00:22:28.527 lat (usec): min=45, max=4024, avg=74.62, stdev=112.37 00:22:28.527 clat percentiles (usec): 00:22:28.527 | 1.00th=[ 51], 5.00th=[ 60], 10.00th=[ 62], 20.00th=[ 64], 00:22:28.527 | 30.00th=[ 66], 40.00th=[ 67], 50.00th=[ 69], 60.00th=[ 70], 00:22:28.527 | 70.00th=[ 72], 80.00th=[ 75], 90.00th=[ 80], 95.00th=[ 85], 00:22:28.527 | 99.00th=[ 100], 99.50th=[ 111], 99.90th=[ 2442], 99.95th=[ 3097], 00:22:28.527 | 99.99th=[ 3818] 00:22:28.527 bw ( KiB/s): min=49648, max=59041, per=100.00%, avg=53303.63, stdev=2342.57, samples=19 00:22:28.527 iops : min=12412, max=14760, avg=13325.89, stdev=585.61, samples=19 00:22:28.527 lat (usec) : 50=0.39%, 100=98.65%, 250=0.73%, 500=0.01%, 750=0.01% 00:22:28.527 lat (usec) : 1000=0.02% 00:22:28.527 lat (msec) : 2=0.07%, 4=0.12%, 10=0.01% 00:22:28.527 cpu : usr=2.85%, sys=10.29%, ctx=133172, majf=0, minf=796 00:22:28.527 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:28.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:28.527 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:28.527 issued rwts: total=0,133171,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:28.527 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:28.527 00:22:28.527 Run status group 0 (all jobs): 00:22:28.527 WRITE: bw=52.0MiB/s (54.5MB/s), 52.0MiB/s-52.0MiB/s (54.5MB/s-54.5MB/s), io=520MiB (545MB), run=10001-10001msec 00:22:28.527 00:22:28.527 Disk stats (read/write): 00:22:28.527 ublkb0: ios=0/131794, merge=0/0, ticks=0/8650, in_queue=8651, util=99.06% 00:22:28.527 11:36:32 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:22:28.527 11:36:32 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.527 11:36:32 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:28.527 [2024-11-20 11:36:32.104468] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:22:28.527 [2024-11-20 11:36:32.142055] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:22:28.527 [2024-11-20 11:36:32.143087] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:22:28.527 [2024-11-20 11:36:32.156504] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:22:28.527 [2024-11-20 11:36:32.156885] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:22:28.527 [2024-11-20 11:36:32.156913] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:22:28.527 11:36:32 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.527 11:36:32 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:22:28.527 11:36:32 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:22:28.527 11:36:32 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:22:28.527 11:36:32 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:28.527 11:36:32 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:28.528 11:36:32 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:28.528 11:36:32 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:28.528 11:36:32 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:22:28.528 11:36:32 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.528 11:36:32 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:28.528 [2024-11-20 11:36:32.165646] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:22:28.528 request: 00:22:28.528 { 00:22:28.528 "ublk_id": 0, 00:22:28.528 "method": "ublk_stop_disk", 00:22:28.528 "req_id": 1 00:22:28.528 } 00:22:28.528 Got JSON-RPC error response 00:22:28.528 response: 00:22:28.528 { 00:22:28.528 "code": -19, 00:22:28.528 "message": "No such device" 00:22:28.528 } 00:22:28.528 11:36:32 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:28.528 11:36:32 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:22:28.528 11:36:32 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:28.528 11:36:32 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:28.528 11:36:32 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:28.528 11:36:32 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:22:28.528 11:36:32 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.528 11:36:32 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:28.528 [2024-11-20 11:36:32.175644] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:22:28.528 [2024-11-20 11:36:32.183645] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:22:28.528 [2024-11-20 11:36:32.183730] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:22:28.528 11:36:32 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.528 11:36:32 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:28.528 11:36:32 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.528 11:36:32 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:28.528 11:36:33 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.528 11:36:33 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:22:28.528 11:36:33 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:22:28.528 11:36:33 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.528 11:36:33 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:28.528 11:36:33 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.528 11:36:33 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:22:28.528 11:36:33 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:22:28.528 11:36:33 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:22:28.528 11:36:33 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:22:28.528 11:36:33 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.528 11:36:33 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:28.528 11:36:33 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.528 11:36:33 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:22:28.528 11:36:33 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:22:28.528 ************************************ 00:22:28.528 END TEST test_create_ublk 00:22:28.528 ************************************ 00:22:28.528 11:36:33 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:22:28.528 00:22:28.528 real 0m11.949s 00:22:28.528 user 0m0.644s 00:22:28.528 sys 0m1.150s 00:22:28.528 11:36:33 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:28.528 11:36:33 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:28.528 11:36:33 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:22:28.528 11:36:33 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:28.528 11:36:33 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:28.528 11:36:33 ublk -- common/autotest_common.sh@10 -- # set +x 00:22:28.528 ************************************ 00:22:28.528 START TEST test_create_multi_ublk 00:22:28.528 ************************************ 00:22:28.528 11:36:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:22:28.528 11:36:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:22:28.528 11:36:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.528 11:36:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:28.528 [2024-11-20 11:36:33.215499] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:22:28.528 [2024-11-20 11:36:33.218592] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:22:28.528 11:36:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.528 11:36:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:22:28.528 11:36:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:22:28.528 11:36:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:28.528 11:36:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:22:28.528 11:36:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.528 11:36:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:28.528 11:36:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.528 11:36:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:22:28.528 11:36:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:22:28.528 11:36:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.528 11:36:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:28.528 [2024-11-20 11:36:33.540738] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:22:28.528 [2024-11-20 11:36:33.541360] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:22:28.528 [2024-11-20 11:36:33.541387] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:22:28.528 [2024-11-20 11:36:33.541404] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:22:28.528 [2024-11-20 11:36:33.548940] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:28.528 [2024-11-20 11:36:33.549002] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:28.528 [2024-11-20 11:36:33.556555] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:28.528 [2024-11-20 11:36:33.557398] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:22:28.528 [2024-11-20 11:36:33.571624] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:22:28.528 11:36:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.528 11:36:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:22:28.528 11:36:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:28.528 11:36:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:22:28.528 11:36:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.528 11:36:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:28.528 11:36:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.528 11:36:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:22:28.528 11:36:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:22:28.528 11:36:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.528 11:36:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:28.528 [2024-11-20 11:36:33.920707] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:22:28.528 [2024-11-20 11:36:33.921266] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:22:28.528 [2024-11-20 11:36:33.921285] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:22:28.528 [2024-11-20 11:36:33.921295] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:22:28.528 [2024-11-20 11:36:33.928623] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:28.529 [2024-11-20 11:36:33.928867] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:28.529 [2024-11-20 11:36:33.936543] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:28.529 [2024-11-20 11:36:33.937336] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:22:28.529 [2024-11-20 11:36:33.945581] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:22:28.529 11:36:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.529 11:36:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:22:28.529 11:36:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:28.529 
11:36:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:22:28.529 11:36:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.529 11:36:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:28.529 11:36:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.529 11:36:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:22:28.529 11:36:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:22:28.529 11:36:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.529 11:36:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:28.787 [2024-11-20 11:36:34.291694] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:22:28.787 [2024-11-20 11:36:34.292247] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:22:28.787 [2024-11-20 11:36:34.292262] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:22:28.787 [2024-11-20 11:36:34.292274] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:22:28.787 [2024-11-20 11:36:34.299596] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:28.787 [2024-11-20 11:36:34.299641] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:28.787 [2024-11-20 11:36:34.307529] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:28.787 [2024-11-20 11:36:34.308324] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:22:28.787 [2024-11-20 11:36:34.311436] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:22:28.787 11:36:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.787 11:36:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:22:28.787 11:36:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:28.787 11:36:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:22:28.787 11:36:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.787 11:36:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:29.046 11:36:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.046 11:36:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:22:29.046 11:36:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:22:29.046 11:36:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.046 11:36:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:29.046 [2024-11-20 11:36:34.651752] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:22:29.046 [2024-11-20 11:36:34.652357] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:22:29.046 [2024-11-20 11:36:34.652388] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:22:29.046 [2024-11-20 11:36:34.652399] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:22:29.046 
[2024-11-20 11:36:34.659554] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:29.046 [2024-11-20 11:36:34.659589] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:29.046 [2024-11-20 11:36:34.667546] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:29.046 [2024-11-20 11:36:34.668298] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:22:29.046 [2024-11-20 11:36:34.683542] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:22:29.046 11:36:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.046 11:36:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:22:29.046 11:36:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:22:29.046 11:36:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.046 11:36:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:29.046 11:36:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.046 11:36:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:22:29.046 { 00:22:29.046 "ublk_device": "/dev/ublkb0", 00:22:29.046 "id": 0, 00:22:29.046 "queue_depth": 512, 00:22:29.046 "num_queues": 4, 00:22:29.046 "bdev_name": "Malloc0" 00:22:29.046 }, 00:22:29.046 { 00:22:29.046 "ublk_device": "/dev/ublkb1", 00:22:29.046 "id": 1, 00:22:29.046 "queue_depth": 512, 00:22:29.046 "num_queues": 4, 00:22:29.046 "bdev_name": "Malloc1" 00:22:29.046 }, 00:22:29.046 { 00:22:29.046 "ublk_device": "/dev/ublkb2", 00:22:29.046 "id": 2, 00:22:29.046 "queue_depth": 512, 00:22:29.046 "num_queues": 4, 00:22:29.046 "bdev_name": "Malloc2" 00:22:29.046 }, 00:22:29.046 { 00:22:29.046 "ublk_device": "/dev/ublkb3", 00:22:29.046 "id": 3, 00:22:29.046 "queue_depth": 512, 00:22:29.046 "num_queues": 4, 00:22:29.046 "bdev_name": "Malloc3" 00:22:29.046 } 00:22:29.046 ]' 00:22:29.046 11:36:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:22:29.046 11:36:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:29.046 11:36:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:22:29.046 11:36:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:22:29.046 11:36:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:22:29.304 11:36:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:22:29.304 11:36:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:22:29.304 11:36:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:22:29.304 11:36:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:22:29.304 11:36:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:22:29.304 11:36:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:22:29.304 11:36:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:22:29.304 11:36:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:29.304 11:36:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:22:29.304 11:36:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:22:29.304 11:36:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:22:29.304 11:36:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:22:29.563 11:36:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:22:29.563 11:36:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:22:29.563 11:36:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:22:29.563 11:36:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:22:29.563 11:36:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:22:29.563 11:36:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:22:29.563 11:36:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:29.563 11:36:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:22:29.563 11:36:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:22:29.563 11:36:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:22:29.563 11:36:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:22:29.563 11:36:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:22:29.822 11:36:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:22:29.822 11:36:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:22:29.822 11:36:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:22:29.822 11:36:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:22:29.822 11:36:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:22:29.822 11:36:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:29.822 11:36:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:22:29.822 11:36:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:22:29.822 11:36:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:22:29.822 11:36:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:22:29.822 11:36:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:22:29.822 11:36:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:22:29.822 11:36:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:22:30.080 11:36:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:22:30.080 11:36:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:22:30.080 11:36:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:22:30.080 11:36:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:22:30.080 11:36:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:22:30.080 11:36:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:30.080 11:36:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:22:30.080 11:36:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.080 11:36:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:30.080 [2024-11-20 11:36:35.675665] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:22:30.080 [2024-11-20 11:36:35.713931] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:22:30.080 [2024-11-20 11:36:35.715333] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:22:30.080 [2024-11-20 11:36:35.722527] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:22:30.080 [2024-11-20 11:36:35.722869] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:22:30.080 [2024-11-20 11:36:35.722904] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:22:30.080 11:36:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.080 11:36:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:30.080 11:36:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:22:30.080 11:36:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.080 11:36:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:30.080 [2024-11-20 11:36:35.737632] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:22:30.080 [2024-11-20 11:36:35.778571] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:22:30.080 [2024-11-20 11:36:35.779720] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:22:30.080 [2024-11-20 11:36:35.787583] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:22:30.080 [2024-11-20 11:36:35.787953] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:22:30.080 [2024-11-20 11:36:35.787976] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:22:30.080 11:36:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.080 11:36:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:30.080 11:36:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:22:30.080 11:36:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.080 11:36:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:30.080 [2024-11-20 11:36:35.801679] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:22:30.380 [2024-11-20 11:36:35.848561] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:22:30.380 [2024-11-20 11:36:35.849552] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:22:30.380 [2024-11-20 11:36:35.856538] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:22:30.380 [2024-11-20 11:36:35.856878] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:22:30.380 [2024-11-20 11:36:35.856894] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:22:30.380 11:36:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.380 11:36:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:30.380 11:36:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:22:30.380 11:36:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.380 11:36:35 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:22:30.380 [2024-11-20 11:36:35.872666] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:22:30.380 [2024-11-20 11:36:35.914571] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:22:30.380 [2024-11-20 11:36:35.915569] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:22:30.380 [2024-11-20 11:36:35.923502] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:22:30.380 [2024-11-20 11:36:35.923875] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:22:30.380 [2024-11-20 11:36:35.923901] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:22:30.380 11:36:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.380 11:36:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:22:30.674 [2024-11-20 11:36:36.217637] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:22:30.674 [2024-11-20 11:36:36.225909] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:22:30.674 [2024-11-20 11:36:36.225975] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:22:30.674 11:36:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:22:30.674 11:36:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:30.674 11:36:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:30.674 11:36:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.674 11:36:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:31.610 11:36:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.610 11:36:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:31.610 11:36:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:31.610 11:36:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.610 11:36:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:31.869 11:36:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.869 11:36:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:31.869 11:36:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:22:31.869 11:36:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.869 11:36:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:32.436 11:36:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.436 11:36:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:22:32.436 11:36:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:22:32.436 11:36:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.436 11:36:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:32.694 11:36:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.694 11:36:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:22:32.694 11:36:38 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:22:32.694 11:36:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.694 11:36:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:32.694 11:36:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.694 11:36:38 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:22:32.694 11:36:38 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:22:32.694 11:36:38 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:22:32.694 11:36:38 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:22:32.694 11:36:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.694 11:36:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:32.694 11:36:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.694 11:36:38 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:22:32.694 11:36:38 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:22:32.694 ************************************ 00:22:32.694 END TEST test_create_multi_ublk 00:22:32.694 ************************************ 00:22:32.694 11:36:38 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:22:32.694 00:22:32.694 real 0m5.236s 00:22:32.694 user 0m1.221s 00:22:32.694 sys 0m0.212s 00:22:32.694 11:36:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:32.694 11:36:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:22:32.954 11:36:38 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:22:32.954 11:36:38 ublk -- ublk/ublk.sh@147 -- # cleanup 00:22:32.954 11:36:38 ublk -- ublk/ublk.sh@130 -- # killprocess 76127 00:22:32.954 11:36:38 ublk -- common/autotest_common.sh@954 -- # '[' -z 76127 ']' 00:22:32.954 11:36:38 ublk -- common/autotest_common.sh@958 -- # kill -0 76127 00:22:32.954 11:36:38 ublk -- common/autotest_common.sh@959 -- # uname 00:22:32.954 11:36:38 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:32.954 11:36:38 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76127 00:22:32.954 killing process with pid 76127 00:22:32.954 11:36:38 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:32.954 11:36:38 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:32.954 11:36:38 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76127' 00:22:32.954 11:36:38 ublk -- common/autotest_common.sh@973 -- # kill 76127 00:22:32.954 11:36:38 ublk -- common/autotest_common.sh@978 -- # wait 76127 00:22:34.330 [2024-11-20 11:36:39.948431] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:22:34.330 [2024-11-20 11:36:39.948531] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:22:36.230 00:22:36.230 real 0m33.729s 00:22:36.230 user 0m48.893s 00:22:36.230 sys 0m10.625s 00:22:36.230 11:36:41 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:36.231 ************************************ 00:22:36.231 END TEST ublk 00:22:36.231 ************************************ 00:22:36.231 11:36:41 ublk -- common/autotest_common.sh@10 -- # set +x 00:22:36.231 11:36:41 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:22:36.231 
11:36:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:36.231 11:36:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:36.231 11:36:41 -- common/autotest_common.sh@10 -- # set +x 00:22:36.231 ************************************ 00:22:36.231 START TEST ublk_recovery 00:22:36.231 ************************************ 00:22:36.231 11:36:41 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:22:36.231 * Looking for test storage... 00:22:36.231 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:22:36.231 11:36:41 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:36.231 11:36:41 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:22:36.231 11:36:41 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:36.231 11:36:41 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:36.231 11:36:41 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:36.231 11:36:41 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:36.231 11:36:41 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:36.231 11:36:41 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:22:36.231 11:36:41 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:22:36.231 11:36:41 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:22:36.231 11:36:41 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:22:36.231 11:36:41 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:22:36.231 11:36:41 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:22:36.231 11:36:41 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:22:36.231 11:36:41 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:36.231 11:36:41 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:22:36.231 11:36:41 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:22:36.231 11:36:41 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:36.231 11:36:41 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:36.231 11:36:41 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:22:36.231 11:36:41 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:22:36.231 11:36:41 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:36.231 11:36:41 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:22:36.231 11:36:41 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:22:36.231 11:36:41 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:22:36.231 11:36:41 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:22:36.231 11:36:41 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:36.231 11:36:41 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:22:36.231 11:36:41 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:22:36.231 11:36:41 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:36.231 11:36:41 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:36.231 11:36:41 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:22:36.231 11:36:41 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:36.231 11:36:41 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:36.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:36.231 --rc genhtml_branch_coverage=1 00:22:36.231 --rc genhtml_function_coverage=1 00:22:36.231 --rc genhtml_legend=1 00:22:36.231 --rc geninfo_all_blocks=1 00:22:36.231 --rc geninfo_unexecuted_blocks=1 00:22:36.231 00:22:36.231 ' 00:22:36.231 11:36:41 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:36.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:36.231 --rc genhtml_branch_coverage=1 00:22:36.231 --rc genhtml_function_coverage=1 00:22:36.231 --rc genhtml_legend=1 00:22:36.231 --rc geninfo_all_blocks=1 00:22:36.231 --rc geninfo_unexecuted_blocks=1 00:22:36.231 00:22:36.231 ' 00:22:36.231 11:36:41 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:36.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:36.231 --rc genhtml_branch_coverage=1 00:22:36.231 --rc genhtml_function_coverage=1 00:22:36.231 --rc genhtml_legend=1 00:22:36.231 --rc geninfo_all_blocks=1 00:22:36.231 --rc geninfo_unexecuted_blocks=1 00:22:36.231 00:22:36.231 ' 00:22:36.231 11:36:41 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:36.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:36.231 --rc genhtml_branch_coverage=1 00:22:36.231 --rc genhtml_function_coverage=1 00:22:36.231 --rc genhtml_legend=1 00:22:36.231 --rc geninfo_all_blocks=1 00:22:36.231 --rc geninfo_unexecuted_blocks=1 00:22:36.231 00:22:36.231 ' 00:22:36.231 11:36:41 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:22:36.231 11:36:41 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:22:36.231 11:36:41 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:22:36.231 11:36:41 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:22:36.231 11:36:41 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:22:36.231 11:36:41 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:22:36.231 11:36:41 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:22:36.231 11:36:41 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:22:36.231 11:36:41 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:22:36.231 11:36:41 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:22:36.231 11:36:41 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=76563 00:22:36.231 11:36:41 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:36.231 11:36:41 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 76563 00:22:36.231 11:36:41 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:22:36.231 11:36:41 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 76563 ']' 00:22:36.231 11:36:41 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:36.231 11:36:41 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:36.231 11:36:41 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:36.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:36.231 11:36:41 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:36.231 11:36:41 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.231 [2024-11-20 11:36:41.907875] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:22:36.231 [2024-11-20 11:36:41.908303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76563 ] 00:22:36.489 [2024-11-20 11:36:42.170444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:36.748 [2024-11-20 11:36:42.314334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:36.748 [2024-11-20 11:36:42.314364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:37.681 11:36:43 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:37.681 11:36:43 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:22:37.681 11:36:43 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:22:37.681 11:36:43 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.681 11:36:43 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:37.681 [2024-11-20 11:36:43.408501] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:22:37.681 [2024-11-20 11:36:43.411654] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:22:37.681 11:36:43 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.681 11:36:43 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:22:37.681 11:36:43 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.681 11:36:43 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:37.940 malloc0 00:22:37.940 11:36:43 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.940 11:36:43 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:22:37.940 11:36:43 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.940 11:36:43 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:37.940 [2024-11-20 11:36:43.583729] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:22:37.940 [2024-11-20 11:36:43.583900] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:22:37.940 [2024-11-20 11:36:43.583919] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:22:37.940 [2024-11-20 11:36:43.583934] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:22:37.940 [2024-11-20 11:36:43.592686] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:22:37.940 [2024-11-20 11:36:43.592733] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:22:37.940 [2024-11-20 11:36:43.599598] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:22:37.940 [2024-11-20 11:36:43.599881] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:22:37.940 [2024-11-20 11:36:43.615531] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:22:37.940 1 00:22:37.940 11:36:43 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.940 11:36:43 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:22:38.945 11:36:44 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=76604 00:22:38.945 11:36:44 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:22:38.945 11:36:44 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:22:39.203 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:39.203 fio-3.35 00:22:39.203 Starting 1 process 00:22:44.476 11:36:49 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 76563 00:22:44.476 11:36:49 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:22:49.747 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 76563 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:22:49.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:49.747 11:36:54 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=76712 00:22:49.747 11:36:54 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:49.747 11:36:54 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:22:49.747 11:36:54 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 76712 00:22:49.747 11:36:54 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 76712 ']' 00:22:49.747 11:36:54 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:49.747 11:36:54 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:49.747 11:36:54 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:49.747 11:36:54 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:49.747 11:36:54 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:49.747 [2024-11-20 11:36:54.751441] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
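[Editor's annotation] This is the crash-recovery scenario proper: the first target (pid 76563) was killed with SIGKILL while the 60-second fio job (pid 76604) still had I/O in flight against /dev/ublkb1, and a replacement target (pid 76712) is now being brought up. The recovery sequence traced below, condensed into a sketch under the same path assumptions as the earlier annotations:

    kill -9 "$tgt_pid"                            # hard-kill the target under live I/O
    "$SPDK/build/bin/spdk_tgt" -m 0x3 -L ublk &   # replacement target; wait for /var/tmp/spdk.sock as before
    RPC ublk_create_target
    RPC bdev_malloc_create -b malloc0 64 4096     # recreate the 64 MiB bdev under its old name
    RPC ublk_recover_disk malloc0 1               # re-attach existing ublk device id 1; fio then runs to completion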
00:22:49.747 [2024-11-20 11:36:54.751609] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76712 ] 00:22:49.747 [2024-11-20 11:36:54.930087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:49.747 [2024-11-20 11:36:55.066819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:49.747 [2024-11-20 11:36:55.066851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:50.316 11:36:56 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:50.316 11:36:56 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:22:50.316 11:36:56 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:22:50.316 11:36:56 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.316 11:36:56 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:50.575 [2024-11-20 11:36:56.079498] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:22:50.575 [2024-11-20 11:36:56.082671] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:22:50.575 11:36:56 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.575 11:36:56 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:22:50.575 11:36:56 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.575 11:36:56 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:50.575 malloc0 00:22:50.575 11:36:56 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.575 11:36:56 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:22:50.575 11:36:56 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.575 11:36:56 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:50.575 [2024-11-20 11:36:56.257678] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:22:50.575 [2024-11-20 11:36:56.257728] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:22:50.575 [2024-11-20 11:36:56.257743] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:22:50.575 [2024-11-20 11:36:56.265533] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:22:50.575 [2024-11-20 11:36:56.265566] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:22:50.575 [2024-11-20 11:36:56.265577] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:22:50.575 [2024-11-20 11:36:56.265683] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:22:50.575 1 00:22:50.575 11:36:56 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.575 11:36:56 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 76604 00:22:50.575 [2024-11-20 11:36:56.273508] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:22:50.575 [2024-11-20 11:36:56.278009] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:22:50.575 [2024-11-20 11:36:56.287753] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:22:50.575 [2024-11-20 
11:36:56.287792] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:23:46.795 00:23:46.795 fio_test: (groupid=0, jobs=1): err= 0: pid=76607: Wed Nov 20 11:37:44 2024 00:23:46.795 read: IOPS=18.6k, BW=72.8MiB/s (76.3MB/s)(4367MiB/60002msec) 00:23:46.795 slat (usec): min=2, max=381, avg= 6.99, stdev= 2.45 00:23:46.795 clat (usec): min=1074, max=6667.9k, avg=3421.86, stdev=53470.20 00:23:46.795 lat (usec): min=1081, max=6667.9k, avg=3428.85, stdev=53470.19 00:23:46.795 clat percentiles (usec): 00:23:46.795 | 1.00th=[ 2212], 5.00th=[ 2376], 10.00th=[ 2474], 20.00th=[ 2671], 00:23:46.795 | 30.00th=[ 2737], 40.00th=[ 2802], 50.00th=[ 2868], 60.00th=[ 2933], 00:23:46.795 | 70.00th=[ 2999], 80.00th=[ 3097], 90.00th=[ 3621], 95.00th=[ 4228], 00:23:46.795 | 99.00th=[ 5669], 99.50th=[ 6325], 99.90th=[ 7504], 99.95th=[ 8586], 00:23:46.795 | 99.99th=[13304] 00:23:46.795 bw ( KiB/s): min= 6304, max=98768, per=100.00%, avg=82796.26, stdev=11639.03, samples=107 00:23:46.795 iops : min= 1576, max=24692, avg=20699.05, stdev=2909.75, samples=107 00:23:46.795 write: IOPS=18.6k, BW=72.7MiB/s (76.3MB/s)(4365MiB/60002msec); 0 zone resets 00:23:46.795 slat (usec): min=2, max=418, avg= 7.04, stdev= 2.52 00:23:46.795 clat (usec): min=865, max=6668.1k, avg=3433.80, stdev=47172.47 00:23:46.795 lat (usec): min=871, max=6668.1k, avg=3440.84, stdev=47172.47 00:23:46.795 clat percentiles (usec): 00:23:46.795 | 1.00th=[ 2278], 5.00th=[ 2507], 10.00th=[ 2573], 20.00th=[ 2769], 00:23:46.795 | 30.00th=[ 2868], 40.00th=[ 2933], 50.00th=[ 2999], 60.00th=[ 3064], 00:23:46.795 | 70.00th=[ 3130], 80.00th=[ 3228], 90.00th=[ 3654], 95.00th=[ 4228], 00:23:46.795 | 99.00th=[ 5669], 99.50th=[ 6325], 99.90th=[ 7439], 99.95th=[ 8160], 00:23:46.795 | 99.99th=[13304] 00:23:46.795 bw ( KiB/s): min= 6104, max=99416, per=100.00%, avg=82761.12, stdev=11715.82, samples=107 00:23:46.795 iops : min= 1526, max=24854, avg=20690.26, stdev=2928.95, samples=107 00:23:46.795 lat (usec) : 1000=0.01% 00:23:46.795 lat (msec) : 2=0.20%, 4=93.42%, 10=6.36%, 20=0.02%, >=2000=0.01% 00:23:46.795 cpu : usr=9.93%, sys=25.95%, ctx=75818, majf=0, minf=13 00:23:46.795 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:23:46.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:46.795 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:46.795 issued rwts: total=1118061,1117409,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:46.795 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:46.795 00:23:46.795 Run status group 0 (all jobs): 00:23:46.795 READ: bw=72.8MiB/s (76.3MB/s), 72.8MiB/s-72.8MiB/s (76.3MB/s-76.3MB/s), io=4367MiB (4580MB), run=60002-60002msec 00:23:46.795 WRITE: bw=72.7MiB/s (76.3MB/s), 72.7MiB/s-72.7MiB/s (76.3MB/s-76.3MB/s), io=4365MiB (4577MB), run=60002-60002msec 00:23:46.795 00:23:46.795 Disk stats (read/write): 00:23:46.795 ublkb1: ios=1115266/1114711, merge=0/0, ticks=3717955/3597569, in_queue=7315524, util=99.94% 00:23:46.795 11:37:44 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:23:46.795 11:37:44 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.795 11:37:44 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:23:46.795 [2024-11-20 11:37:44.901509] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:23:46.795 [2024-11-20 11:37:44.932606] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:23:46.795 
[2024-11-20 11:37:44.936685] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:23:46.795 [2024-11-20 11:37:44.944565] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:23:46.795 [2024-11-20 11:37:44.944687] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:23:46.795 [2024-11-20 11:37:44.944702] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:23:46.795 11:37:44 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.795 11:37:44 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:23:46.795 11:37:44 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.795 11:37:44 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:23:46.795 [2024-11-20 11:37:44.960627] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:23:46.795 [2024-11-20 11:37:44.968503] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:23:46.795 [2024-11-20 11:37:44.968549] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:23:46.795 11:37:44 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.795 11:37:44 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:23:46.795 11:37:44 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:23:46.795 11:37:44 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 76712 00:23:46.795 11:37:44 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 76712 ']' 00:23:46.795 11:37:44 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 76712 00:23:46.795 11:37:44 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:23:46.795 11:37:44 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:46.795 11:37:44 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76712 00:23:46.795 killing process with pid 76712 00:23:46.795 11:37:45 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:46.795 11:37:45 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:46.795 11:37:45 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76712' 00:23:46.795 11:37:45 ublk_recovery -- common/autotest_common.sh@973 -- # kill 76712 00:23:46.795 11:37:45 ublk_recovery -- common/autotest_common.sh@978 -- # wait 76712 00:23:46.795 [2024-11-20 11:37:46.689142] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:23:46.795 [2024-11-20 11:37:46.689207] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:23:46.795 00:23:46.795 real 1m6.649s 00:23:46.795 user 1m48.856s 00:23:46.795 sys 0m35.106s 00:23:46.795 ************************************ 00:23:46.795 END TEST ublk_recovery 00:23:46.795 ************************************ 00:23:46.795 11:37:48 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:46.795 11:37:48 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:23:46.795 11:37:48 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:23:46.795 11:37:48 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:23:46.795 11:37:48 -- spdk/autotest.sh@260 -- # timing_exit lib 00:23:46.795 11:37:48 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:46.795 11:37:48 -- common/autotest_common.sh@10 -- # set +x 00:23:46.795 11:37:48 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:23:46.795 11:37:48 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:23:46.795 11:37:48 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:23:46.795 
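The killprocess helper traced above follows a common teardown shape: confirm the PID is still alive with kill -0, read the process name back via ps so a recycled PID is not killed by mistake, send the default signal, then wait for the PID to be reaped. A minimal sketch of that shape (not the autotest_common.sh source, just the sequence visible in the trace):

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0        # nothing to do
        local name
        name=$(ps --no-headers -o comm= "$pid")       # same probe as in the trace
        echo "killing process with pid $pid ($name)"
        kill "$pid"
        # wait only reaps children of this shell; the traced helper can use it
        # because spdk_tgt was launched by the same script
        wait "$pid" 2>/dev/null
    }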
11:37:48 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:23:46.795 11:37:48 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:23:46.795 11:37:48 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:23:46.795 11:37:48 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:23:46.795 11:37:48 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:23:46.795 11:37:48 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:23:46.795 11:37:48 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:23:46.795 11:37:48 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:23:46.795 11:37:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:46.795 11:37:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:46.795 11:37:48 -- common/autotest_common.sh@10 -- # set +x 00:23:46.795 ************************************ 00:23:46.795 START TEST ftl 00:23:46.795 ************************************ 00:23:46.795 11:37:48 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:23:46.795 * Looking for test storage... 00:23:46.795 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:46.795 11:37:48 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:46.795 11:37:48 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:46.795 11:37:48 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:23:46.795 11:37:48 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:46.795 11:37:48 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:46.795 11:37:48 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:46.795 11:37:48 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:46.795 11:37:48 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:23:46.795 11:37:48 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:23:46.795 11:37:48 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:23:46.795 11:37:48 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:23:46.795 11:37:48 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:23:46.795 11:37:48 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:23:46.795 11:37:48 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:23:46.795 11:37:48 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:46.795 11:37:48 ftl -- scripts/common.sh@344 -- # case "$op" in 00:23:46.795 11:37:48 ftl -- scripts/common.sh@345 -- # : 1 00:23:46.796 11:37:48 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:46.796 11:37:48 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:46.796 11:37:48 ftl -- scripts/common.sh@365 -- # decimal 1 00:23:46.796 11:37:48 ftl -- scripts/common.sh@353 -- # local d=1 00:23:46.796 11:37:48 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:46.796 11:37:48 ftl -- scripts/common.sh@355 -- # echo 1 00:23:46.796 11:37:48 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:23:46.796 11:37:48 ftl -- scripts/common.sh@366 -- # decimal 2 00:23:46.796 11:37:48 ftl -- scripts/common.sh@353 -- # local d=2 00:23:46.796 11:37:48 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:46.796 11:37:48 ftl -- scripts/common.sh@355 -- # echo 2 00:23:46.796 11:37:48 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:23:46.796 11:37:48 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:46.796 11:37:48 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:46.796 11:37:48 ftl -- scripts/common.sh@368 -- # return 0 00:23:46.796 11:37:48 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:46.796 11:37:48 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:46.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.796 --rc genhtml_branch_coverage=1 00:23:46.796 --rc genhtml_function_coverage=1 00:23:46.796 --rc genhtml_legend=1 00:23:46.796 --rc geninfo_all_blocks=1 00:23:46.796 --rc geninfo_unexecuted_blocks=1 00:23:46.796 00:23:46.796 ' 00:23:46.796 11:37:48 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:46.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.796 --rc genhtml_branch_coverage=1 00:23:46.796 --rc genhtml_function_coverage=1 00:23:46.796 --rc genhtml_legend=1 00:23:46.796 --rc geninfo_all_blocks=1 00:23:46.796 --rc geninfo_unexecuted_blocks=1 00:23:46.796 00:23:46.796 ' 00:23:46.796 11:37:48 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:46.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.796 --rc genhtml_branch_coverage=1 00:23:46.796 --rc genhtml_function_coverage=1 00:23:46.796 --rc genhtml_legend=1 00:23:46.796 --rc geninfo_all_blocks=1 00:23:46.796 --rc geninfo_unexecuted_blocks=1 00:23:46.796 00:23:46.796 ' 00:23:46.796 11:37:48 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:46.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.796 --rc genhtml_branch_coverage=1 00:23:46.796 --rc genhtml_function_coverage=1 00:23:46.796 --rc genhtml_legend=1 00:23:46.796 --rc geninfo_all_blocks=1 00:23:46.796 --rc geninfo_unexecuted_blocks=1 00:23:46.796 00:23:46.796 ' 00:23:46.796 11:37:48 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:46.796 11:37:48 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:23:46.796 11:37:48 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:46.796 11:37:48 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:46.796 11:37:48 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
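The scripts/common.sh trace above shows how the suite decides whether the installed lcov is new enough: version strings are split on '.', '-' and ':' by overriding IFS, read into arrays, and compared component by component as integers. The same idiom in isolation (a sketch mirroring the traced logic, not a verbatim copy of common.sh):

    cmp_lt() {   # succeeds when version $1 sorts before version $2
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal is not less-than
    }
    cmp_lt 1.15 2 && echo "1.15 < 2"   # matches the traced "lt 1.15 2" taking the < branch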
00:23:46.796 11:37:48 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:46.796 11:37:48 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:46.796 11:37:48 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:46.796 11:37:48 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:46.796 11:37:48 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:46.796 11:37:48 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:46.796 11:37:48 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:46.796 11:37:48 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:46.796 11:37:48 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:46.796 11:37:48 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:46.796 11:37:48 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:46.796 11:37:48 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:46.796 11:37:48 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:46.796 11:37:48 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:46.796 11:37:48 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:46.796 11:37:48 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:46.796 11:37:48 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:46.796 11:37:48 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:46.796 11:37:48 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:46.796 11:37:48 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:46.796 11:37:48 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:46.796 11:37:48 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:46.796 11:37:48 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:46.796 11:37:48 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:46.796 11:37:48 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:46.796 11:37:48 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:23:46.796 11:37:48 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:23:46.796 11:37:48 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:23:46.796 11:37:48 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:23:46.796 11:37:48 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:46.796 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:46.796 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:46.796 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:46.796 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:46.796 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:46.796 11:37:49 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=77509 00:23:46.796 11:37:49 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:23:46.796 11:37:49 ftl -- ftl/ftl.sh@38 -- # waitforlisten 77509 00:23:46.796 11:37:49 ftl -- common/autotest_common.sh@835 -- # '[' -z 77509 ']' 00:23:46.796 11:37:49 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:46.796 11:37:49 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:46.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:46.796 11:37:49 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:46.796 11:37:49 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:46.796 11:37:49 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:46.796 [2024-11-20 11:37:49.364277] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:23:46.796 [2024-11-20 11:37:49.364738] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77509 ] 00:23:46.796 [2024-11-20 11:37:49.569889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.796 [2024-11-20 11:37:49.736528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:46.796 11:37:50 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:46.796 11:37:50 ftl -- common/autotest_common.sh@868 -- # return 0 00:23:46.796 11:37:50 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:23:46.796 11:37:50 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:23:46.796 11:37:51 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:23:46.796 11:37:51 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:23:46.796 11:37:52 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:23:46.796 11:37:52 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:23:46.796 11:37:52 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:23:46.796 11:37:52 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:23:46.796 11:37:52 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:23:46.796 11:37:52 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:23:46.796 11:37:52 ftl -- ftl/ftl.sh@50 -- # break 00:23:46.796 11:37:52 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:23:46.796 11:37:52 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:23:46.796 11:37:52 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:23:46.796 11:37:52 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:23:47.054 11:37:52 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:23:47.054 11:37:52 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:23:47.054 11:37:52 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:23:47.054 11:37:52 ftl -- ftl/ftl.sh@63 -- # break 00:23:47.054 11:37:52 ftl -- ftl/ftl.sh@66 -- # killprocess 77509 00:23:47.054 11:37:52 ftl -- common/autotest_common.sh@954 -- # '[' -z 77509 ']' 00:23:47.054 11:37:52 ftl -- common/autotest_common.sh@958 -- # kill -0 77509 00:23:47.054 11:37:52 ftl -- common/autotest_common.sh@959 -- # uname 00:23:47.054 11:37:52 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:47.054 11:37:52 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77509 00:23:47.054 killing process with pid 77509 00:23:47.054 11:37:52 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:47.055 11:37:52 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:47.055 11:37:52 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77509' 00:23:47.055 11:37:52 ftl -- common/autotest_common.sh@973 -- # kill 77509 00:23:47.055 11:37:52 ftl -- common/autotest_common.sh@978 -- # wait 77509 00:23:49.585 11:37:55 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:23:49.585 11:37:55 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:23:49.585 11:37:55 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:49.585 11:37:55 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:49.585 11:37:55 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:49.585 ************************************ 00:23:49.585 START TEST ftl_fio_basic 00:23:49.585 ************************************ 00:23:49.585 11:37:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:23:49.844 * Looking for test storage... 00:23:49.844 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:49.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.844 --rc genhtml_branch_coverage=1 00:23:49.844 --rc genhtml_function_coverage=1 00:23:49.844 --rc genhtml_legend=1 00:23:49.844 --rc geninfo_all_blocks=1 00:23:49.844 --rc geninfo_unexecuted_blocks=1 00:23:49.844 00:23:49.844 ' 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:49.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.844 --rc genhtml_branch_coverage=1 00:23:49.844 --rc genhtml_function_coverage=1 00:23:49.844 --rc genhtml_legend=1 00:23:49.844 --rc geninfo_all_blocks=1 00:23:49.844 --rc geninfo_unexecuted_blocks=1 00:23:49.844 00:23:49.844 ' 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:49.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.844 --rc genhtml_branch_coverage=1 00:23:49.844 --rc genhtml_function_coverage=1 00:23:49.844 --rc genhtml_legend=1 00:23:49.844 --rc geninfo_all_blocks=1 00:23:49.844 --rc geninfo_unexecuted_blocks=1 00:23:49.844 00:23:49.844 ' 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:49.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:49.844 --rc genhtml_branch_coverage=1 00:23:49.844 --rc genhtml_function_coverage=1 00:23:49.844 --rc genhtml_legend=1 00:23:49.844 --rc geninfo_all_blocks=1 00:23:49.844 --rc geninfo_unexecuted_blocks=1 00:23:49.844 00:23:49.844 ' 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:23:49.844 11:37:55 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:23:49.844 11:37:55 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:23:49.845 11:37:55 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:49.845 11:37:55 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:23:49.845 11:37:55 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:23:49.845 11:37:55 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:23:49.845 11:37:55 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:23:49.845 11:37:55 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:23:49.845 11:37:55 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:23:49.845 11:37:55 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:23:49.845 11:37:55 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:23:49.845 11:37:55 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:23:49.845 11:37:55 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:49.845 11:37:55 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:49.845 11:37:55 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:23:49.845 11:37:55 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=77664 00:23:49.845 11:37:55 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:23:49.845 11:37:55 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 77664 00:23:49.845 11:37:55 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 77664 ']' 00:23:49.845 11:37:55 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:49.845 11:37:55 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:49.845 11:37:55 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:49.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:49.845 11:37:55 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:49.845 11:37:55 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:50.104 [2024-11-20 11:37:55.673298] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
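waitforlisten, entered above for pid 77664, blocks until the freshly started spdk_tgt is both still alive and reachable over its RPC UNIX socket at /var/tmp/spdk.sock, giving up after max_retries=100 attempts. A minimal stand-in for that readiness loop (an illustrative sketch; the real helper also issues an RPC probe rather than only checking for the socket file):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        for (( i = 0; i < max_retries; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
            [[ -S $rpc_addr ]] && return 0           # UNIX socket is up
            sleep 0.1
        done
        return 1
    }
    waitforlisten 77664 /var/tmp/spdk.sock && echo "spdk_tgt is ready for RPCs"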
00:23:50.104 [2024-11-20 11:37:55.673708] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77664 ] 00:23:50.362 [2024-11-20 11:37:55.868892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:50.362 [2024-11-20 11:37:56.012431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:50.362 [2024-11-20 11:37:56.012505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:50.362 [2024-11-20 11:37:56.012524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:51.297 11:37:56 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:51.297 11:37:56 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:23:51.297 11:37:56 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:51.297 11:37:56 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:23:51.297 11:37:56 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:51.297 11:37:56 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:23:51.297 11:37:56 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:23:51.297 11:37:56 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:51.555 11:37:57 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:51.555 11:37:57 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:23:51.555 11:37:57 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:51.555 11:37:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:23:51.555 11:37:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:51.555 11:37:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:23:51.555 11:37:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:23:51.555 11:37:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:52.121 11:37:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:52.121 { 00:23:52.121 "name": "nvme0n1", 00:23:52.121 "aliases": [ 00:23:52.121 "2454beef-7baf-4a5d-9a45-3a334f6636dc" 00:23:52.121 ], 00:23:52.121 "product_name": "NVMe disk", 00:23:52.121 "block_size": 4096, 00:23:52.121 "num_blocks": 1310720, 00:23:52.121 "uuid": "2454beef-7baf-4a5d-9a45-3a334f6636dc", 00:23:52.121 "numa_id": -1, 00:23:52.121 "assigned_rate_limits": { 00:23:52.121 "rw_ios_per_sec": 0, 00:23:52.121 "rw_mbytes_per_sec": 0, 00:23:52.121 "r_mbytes_per_sec": 0, 00:23:52.121 "w_mbytes_per_sec": 0 00:23:52.121 }, 00:23:52.121 "claimed": false, 00:23:52.121 "zoned": false, 00:23:52.121 "supported_io_types": { 00:23:52.121 "read": true, 00:23:52.121 "write": true, 00:23:52.121 "unmap": true, 00:23:52.121 "flush": true, 00:23:52.121 "reset": true, 00:23:52.121 "nvme_admin": true, 00:23:52.121 "nvme_io": true, 00:23:52.121 "nvme_io_md": false, 00:23:52.121 "write_zeroes": true, 00:23:52.121 "zcopy": false, 00:23:52.121 "get_zone_info": false, 00:23:52.121 "zone_management": false, 00:23:52.121 "zone_append": false, 00:23:52.121 "compare": true, 00:23:52.121 "compare_and_write": false, 00:23:52.121 "abort": true, 00:23:52.121 
"seek_hole": false, 00:23:52.121 "seek_data": false, 00:23:52.121 "copy": true, 00:23:52.121 "nvme_iov_md": false 00:23:52.121 }, 00:23:52.121 "driver_specific": { 00:23:52.121 "nvme": [ 00:23:52.121 { 00:23:52.121 "pci_address": "0000:00:11.0", 00:23:52.121 "trid": { 00:23:52.121 "trtype": "PCIe", 00:23:52.121 "traddr": "0000:00:11.0" 00:23:52.121 }, 00:23:52.121 "ctrlr_data": { 00:23:52.121 "cntlid": 0, 00:23:52.121 "vendor_id": "0x1b36", 00:23:52.121 "model_number": "QEMU NVMe Ctrl", 00:23:52.121 "serial_number": "12341", 00:23:52.121 "firmware_revision": "8.0.0", 00:23:52.121 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:52.121 "oacs": { 00:23:52.121 "security": 0, 00:23:52.121 "format": 1, 00:23:52.121 "firmware": 0, 00:23:52.121 "ns_manage": 1 00:23:52.121 }, 00:23:52.121 "multi_ctrlr": false, 00:23:52.121 "ana_reporting": false 00:23:52.121 }, 00:23:52.121 "vs": { 00:23:52.121 "nvme_version": "1.4" 00:23:52.121 }, 00:23:52.121 "ns_data": { 00:23:52.121 "id": 1, 00:23:52.121 "can_share": false 00:23:52.121 } 00:23:52.121 } 00:23:52.121 ], 00:23:52.121 "mp_policy": "active_passive" 00:23:52.121 } 00:23:52.121 } 00:23:52.121 ]' 00:23:52.121 11:37:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:52.121 11:37:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:23:52.121 11:37:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:52.121 11:37:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:23:52.122 11:37:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:23:52.122 11:37:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:23:52.122 11:37:57 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:23:52.122 11:37:57 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:52.122 11:37:57 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:23:52.122 11:37:57 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:52.122 11:37:57 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:52.380 11:37:58 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:23:52.380 11:37:58 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:52.638 11:37:58 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=abac590a-4cc8-4160-901c-e220f518cf7e 00:23:52.638 11:37:58 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u abac590a-4cc8-4160-901c-e220f518cf7e 00:23:52.897 11:37:58 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=54f7d204-2db0-49c6-8fe5-bc921146b41c 00:23:52.897 11:37:58 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 54f7d204-2db0-49c6-8fe5-bc921146b41c 00:23:52.897 11:37:58 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:23:52.897 11:37:58 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:23:52.897 11:37:58 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=54f7d204-2db0-49c6-8fe5-bc921146b41c 00:23:52.897 11:37:58 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:23:52.897 11:37:58 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 54f7d204-2db0-49c6-8fe5-bc921146b41c 00:23:52.897 11:37:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=54f7d204-2db0-49c6-8fe5-bc921146b41c 
00:23:52.897 11:37:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:52.897 11:37:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:23:52.897 11:37:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:23:52.897 11:37:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 54f7d204-2db0-49c6-8fe5-bc921146b41c 00:23:53.475 11:37:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:53.475 { 00:23:53.475 "name": "54f7d204-2db0-49c6-8fe5-bc921146b41c", 00:23:53.475 "aliases": [ 00:23:53.475 "lvs/nvme0n1p0" 00:23:53.475 ], 00:23:53.475 "product_name": "Logical Volume", 00:23:53.475 "block_size": 4096, 00:23:53.475 "num_blocks": 26476544, 00:23:53.475 "uuid": "54f7d204-2db0-49c6-8fe5-bc921146b41c", 00:23:53.475 "assigned_rate_limits": { 00:23:53.475 "rw_ios_per_sec": 0, 00:23:53.475 "rw_mbytes_per_sec": 0, 00:23:53.475 "r_mbytes_per_sec": 0, 00:23:53.475 "w_mbytes_per_sec": 0 00:23:53.475 }, 00:23:53.475 "claimed": false, 00:23:53.475 "zoned": false, 00:23:53.475 "supported_io_types": { 00:23:53.475 "read": true, 00:23:53.475 "write": true, 00:23:53.475 "unmap": true, 00:23:53.475 "flush": false, 00:23:53.475 "reset": true, 00:23:53.475 "nvme_admin": false, 00:23:53.475 "nvme_io": false, 00:23:53.475 "nvme_io_md": false, 00:23:53.475 "write_zeroes": true, 00:23:53.475 "zcopy": false, 00:23:53.475 "get_zone_info": false, 00:23:53.475 "zone_management": false, 00:23:53.475 "zone_append": false, 00:23:53.475 "compare": false, 00:23:53.475 "compare_and_write": false, 00:23:53.475 "abort": false, 00:23:53.475 "seek_hole": true, 00:23:53.475 "seek_data": true, 00:23:53.475 "copy": false, 00:23:53.475 "nvme_iov_md": false 00:23:53.475 }, 00:23:53.475 "driver_specific": { 00:23:53.475 "lvol": { 00:23:53.475 "lvol_store_uuid": "abac590a-4cc8-4160-901c-e220f518cf7e", 00:23:53.475 "base_bdev": "nvme0n1", 00:23:53.475 "thin_provision": true, 00:23:53.475 "num_allocated_clusters": 0, 00:23:53.475 "snapshot": false, 00:23:53.475 "clone": false, 00:23:53.475 "esnap_clone": false 00:23:53.475 } 00:23:53.475 } 00:23:53.475 } 00:23:53.475 ]' 00:23:53.475 11:37:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:53.475 11:37:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:23:53.475 11:37:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:53.475 11:37:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:53.475 11:37:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:53.475 11:37:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:23:53.475 11:37:59 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:23:53.475 11:37:59 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:23:53.475 11:37:59 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:53.734 11:37:59 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:53.734 11:37:59 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:53.734 11:37:59 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 54f7d204-2db0-49c6-8fe5-bc921146b41c 00:23:53.734 11:37:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=54f7d204-2db0-49c6-8fe5-bc921146b41c 00:23:53.734 11:37:59 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:53.734 11:37:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:23:53.734 11:37:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:23:53.734 11:37:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 54f7d204-2db0-49c6-8fe5-bc921146b41c 00:23:53.993 11:37:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:53.993 { 00:23:53.993 "name": "54f7d204-2db0-49c6-8fe5-bc921146b41c", 00:23:53.993 "aliases": [ 00:23:53.993 "lvs/nvme0n1p0" 00:23:53.993 ], 00:23:53.993 "product_name": "Logical Volume", 00:23:53.993 "block_size": 4096, 00:23:53.993 "num_blocks": 26476544, 00:23:53.993 "uuid": "54f7d204-2db0-49c6-8fe5-bc921146b41c", 00:23:53.993 "assigned_rate_limits": { 00:23:53.993 "rw_ios_per_sec": 0, 00:23:53.993 "rw_mbytes_per_sec": 0, 00:23:53.993 "r_mbytes_per_sec": 0, 00:23:53.993 "w_mbytes_per_sec": 0 00:23:53.993 }, 00:23:53.993 "claimed": false, 00:23:53.993 "zoned": false, 00:23:53.993 "supported_io_types": { 00:23:53.993 "read": true, 00:23:53.993 "write": true, 00:23:53.993 "unmap": true, 00:23:53.993 "flush": false, 00:23:53.993 "reset": true, 00:23:53.993 "nvme_admin": false, 00:23:53.993 "nvme_io": false, 00:23:53.993 "nvme_io_md": false, 00:23:53.993 "write_zeroes": true, 00:23:53.993 "zcopy": false, 00:23:53.993 "get_zone_info": false, 00:23:53.993 "zone_management": false, 00:23:53.993 "zone_append": false, 00:23:53.993 "compare": false, 00:23:53.993 "compare_and_write": false, 00:23:53.993 "abort": false, 00:23:53.993 "seek_hole": true, 00:23:53.993 "seek_data": true, 00:23:53.993 "copy": false, 00:23:53.993 "nvme_iov_md": false 00:23:53.993 }, 00:23:53.993 "driver_specific": { 00:23:53.993 "lvol": { 00:23:53.993 "lvol_store_uuid": "abac590a-4cc8-4160-901c-e220f518cf7e", 00:23:53.993 "base_bdev": "nvme0n1", 00:23:53.993 "thin_provision": true, 00:23:53.993 "num_allocated_clusters": 0, 00:23:53.993 "snapshot": false, 00:23:53.993 "clone": false, 00:23:53.993 "esnap_clone": false 00:23:53.993 } 00:23:53.993 } 00:23:53.993 } 00:23:53.993 ]' 00:23:53.993 11:37:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:54.251 11:37:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:23:54.251 11:37:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:54.251 11:37:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:54.251 11:37:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:54.251 11:37:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:23:54.251 11:37:59 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:23:54.251 11:37:59 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:54.510 11:38:00 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:23:54.510 11:38:00 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:23:54.510 11:38:00 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:23:54.510 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:23:54.510 11:38:00 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 54f7d204-2db0-49c6-8fe5-bc921146b41c 00:23:54.510 11:38:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=54f7d204-2db0-49c6-8fe5-bc921146b41c 00:23:54.510 11:38:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:54.510 11:38:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:23:54.510 11:38:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:23:54.510 11:38:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 54f7d204-2db0-49c6-8fe5-bc921146b41c 00:23:55.077 11:38:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:55.077 { 00:23:55.077 "name": "54f7d204-2db0-49c6-8fe5-bc921146b41c", 00:23:55.077 "aliases": [ 00:23:55.077 "lvs/nvme0n1p0" 00:23:55.077 ], 00:23:55.077 "product_name": "Logical Volume", 00:23:55.077 "block_size": 4096, 00:23:55.077 "num_blocks": 26476544, 00:23:55.077 "uuid": "54f7d204-2db0-49c6-8fe5-bc921146b41c", 00:23:55.077 "assigned_rate_limits": { 00:23:55.077 "rw_ios_per_sec": 0, 00:23:55.077 "rw_mbytes_per_sec": 0, 00:23:55.077 "r_mbytes_per_sec": 0, 00:23:55.077 "w_mbytes_per_sec": 0 00:23:55.077 }, 00:23:55.077 "claimed": false, 00:23:55.077 "zoned": false, 00:23:55.077 "supported_io_types": { 00:23:55.077 "read": true, 00:23:55.077 "write": true, 00:23:55.077 "unmap": true, 00:23:55.077 "flush": false, 00:23:55.077 "reset": true, 00:23:55.077 "nvme_admin": false, 00:23:55.077 "nvme_io": false, 00:23:55.077 "nvme_io_md": false, 00:23:55.077 "write_zeroes": true, 00:23:55.077 "zcopy": false, 00:23:55.077 "get_zone_info": false, 00:23:55.077 "zone_management": false, 00:23:55.077 "zone_append": false, 00:23:55.077 "compare": false, 00:23:55.077 "compare_and_write": false, 00:23:55.077 "abort": false, 00:23:55.077 "seek_hole": true, 00:23:55.077 "seek_data": true, 00:23:55.077 "copy": false, 00:23:55.077 "nvme_iov_md": false 00:23:55.077 }, 00:23:55.077 "driver_specific": { 00:23:55.077 "lvol": { 00:23:55.077 "lvol_store_uuid": "abac590a-4cc8-4160-901c-e220f518cf7e", 00:23:55.077 "base_bdev": "nvme0n1", 00:23:55.077 "thin_provision": true, 00:23:55.077 "num_allocated_clusters": 0, 00:23:55.077 "snapshot": false, 00:23:55.077 "clone": false, 00:23:55.077 "esnap_clone": false 00:23:55.077 } 00:23:55.077 } 00:23:55.077 } 00:23:55.077 ]' 00:23:55.077 11:38:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:55.077 11:38:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:23:55.077 11:38:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:55.077 11:38:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:55.077 11:38:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:55.077 11:38:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:23:55.077 11:38:00 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:23:55.077 11:38:00 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:23:55.077 11:38:00 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 54f7d204-2db0-49c6-8fe5-bc921146b41c -c nvc0n1p0 --l2p_dram_limit 60 00:23:55.337 [2024-11-20 11:38:00.969909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.337 [2024-11-20 11:38:00.969974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:55.337 [2024-11-20 11:38:00.970004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:55.337 
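Pulling the traced RPCs together: the FTL bdev just created sits on a 103424 MiB (101 GiB) thin-provisioned lvol carved out of the 0000:00:11.0 namespace, with a 5171 MiB split of the 0000:00:10.0 namespace serving as its non-volatile write-buffer cache. Condensed from the trace above (a recap of commands already issued, not new ones):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0    # base device -> nvme0n1
    $rpc bdev_lvol_create_lvstore nvme0n1 lvs                            # -> abac590a-4cc8-4160-901c-e220f518cf7e
    $rpc bdev_lvol_create nvme0n1p0 103424 -t -u abac590a-4cc8-4160-901c-e220f518cf7e
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0     # cache device -> nvc0n1
    $rpc bdev_split_create nvc0n1 -s 5171 1                              # -> nvc0n1p0
    $rpc -t 240 bdev_ftl_create -b ftl0 -d 54f7d204-2db0-49c6-8fe5-bc921146b41c \
        -c nvc0n1p0 --l2p_dram_limit 60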
[2024-11-20 11:38:00.970019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.337 [2024-11-20 11:38:00.970116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.337 [2024-11-20 11:38:00.970138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:55.337 [2024-11-20 11:38:00.970157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:23:55.337 [2024-11-20 11:38:00.970170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.337 [2024-11-20 11:38:00.970229] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:55.337 [2024-11-20 11:38:00.971651] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:55.337 [2024-11-20 11:38:00.971692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.337 [2024-11-20 11:38:00.971706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:55.337 [2024-11-20 11:38:00.971722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.485 ms 00:23:55.337 [2024-11-20 11:38:00.971734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.337 [2024-11-20 11:38:00.971965] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 675c6f92-2f5e-4f8e-8a0c-85fe782bc9ba 00:23:55.337 [2024-11-20 11:38:00.973692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.337 [2024-11-20 11:38:00.973739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:55.337 [2024-11-20 11:38:00.973754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:23:55.337 [2024-11-20 11:38:00.973773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.337 [2024-11-20 11:38:00.981876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.337 [2024-11-20 11:38:00.981928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:55.337 [2024-11-20 11:38:00.981944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.977 ms 00:23:55.337 [2024-11-20 11:38:00.981959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.337 [2024-11-20 11:38:00.982129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.337 [2024-11-20 11:38:00.982156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:55.337 [2024-11-20 11:38:00.982170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:23:55.337 [2024-11-20 11:38:00.982195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.337 [2024-11-20 11:38:00.982280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.337 [2024-11-20 11:38:00.982312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:55.337 [2024-11-20 11:38:00.982324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:55.337 [2024-11-20 11:38:00.982340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.337 [2024-11-20 11:38:00.982383] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:55.337 [2024-11-20 11:38:00.988680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.337 [2024-11-20 
11:38:00.988735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:55.337 [2024-11-20 11:38:00.988756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.307 ms 00:23:55.337 [2024-11-20 11:38:00.988770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.337 [2024-11-20 11:38:00.988832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.337 [2024-11-20 11:38:00.988844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:55.337 [2024-11-20 11:38:00.988857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:55.337 [2024-11-20 11:38:00.988868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.337 [2024-11-20 11:38:00.988957] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:55.337 [2024-11-20 11:38:00.989141] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:55.337 [2024-11-20 11:38:00.989184] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:55.337 [2024-11-20 11:38:00.989216] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:55.337 [2024-11-20 11:38:00.989253] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:55.337 [2024-11-20 11:38:00.989268] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:55.337 [2024-11-20 11:38:00.989287] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:55.337 [2024-11-20 11:38:00.989299] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:55.337 [2024-11-20 11:38:00.989316] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:55.337 [2024-11-20 11:38:00.989329] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:55.337 [2024-11-20 11:38:00.989347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.337 [2024-11-20 11:38:00.989367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:55.337 [2024-11-20 11:38:00.989382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.389 ms 00:23:55.337 [2024-11-20 11:38:00.989395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.337 [2024-11-20 11:38:00.989513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.337 [2024-11-20 11:38:00.989531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:55.337 [2024-11-20 11:38:00.989546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:23:55.337 [2024-11-20 11:38:00.989557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.337 [2024-11-20 11:38:00.989685] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:55.337 [2024-11-20 11:38:00.989720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:55.337 [2024-11-20 11:38:00.989740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:55.337 [2024-11-20 11:38:00.989752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:55.337 [2024-11-20 11:38:00.989767] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:23:55.337 [2024-11-20 11:38:00.989779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:55.337 [2024-11-20 11:38:00.989794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:55.337 [2024-11-20 11:38:00.989807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:55.337 [2024-11-20 11:38:00.989821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:55.337 [2024-11-20 11:38:00.989833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:55.337 [2024-11-20 11:38:00.989848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:55.337 [2024-11-20 11:38:00.989860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:55.337 [2024-11-20 11:38:00.989875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:55.337 [2024-11-20 11:38:00.989887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:55.337 [2024-11-20 11:38:00.989901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:55.337 [2024-11-20 11:38:00.989912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:55.337 [2024-11-20 11:38:00.989931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:55.337 [2024-11-20 11:38:00.989943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:55.337 [2024-11-20 11:38:00.989957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:55.337 [2024-11-20 11:38:00.989969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:55.337 [2024-11-20 11:38:00.989984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:55.337 [2024-11-20 11:38:00.989995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:55.337 [2024-11-20 11:38:00.990009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:55.337 [2024-11-20 11:38:00.990020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:55.337 [2024-11-20 11:38:00.990034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:55.337 [2024-11-20 11:38:00.990045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:55.337 [2024-11-20 11:38:00.990060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:55.337 [2024-11-20 11:38:00.990071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:55.337 [2024-11-20 11:38:00.990085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:55.337 [2024-11-20 11:38:00.990098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:55.337 [2024-11-20 11:38:00.990112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:55.337 [2024-11-20 11:38:00.990124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:55.337 [2024-11-20 11:38:00.990145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:55.337 [2024-11-20 11:38:00.990157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:55.337 [2024-11-20 11:38:00.990174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:55.337 [2024-11-20 11:38:00.990211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:55.337 [2024-11-20 11:38:00.990229] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:55.337 [2024-11-20 11:38:00.990242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:55.337 [2024-11-20 11:38:00.990259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:55.338 [2024-11-20 11:38:00.990271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:55.338 [2024-11-20 11:38:00.990290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:55.338 [2024-11-20 11:38:00.990303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:55.338 [2024-11-20 11:38:00.990321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:55.338 [2024-11-20 11:38:00.990332] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:55.338 [2024-11-20 11:38:00.990350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:55.338 [2024-11-20 11:38:00.990363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:55.338 [2024-11-20 11:38:00.990378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:55.338 [2024-11-20 11:38:00.990390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:55.338 [2024-11-20 11:38:00.990407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:55.338 [2024-11-20 11:38:00.990419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:55.338 [2024-11-20 11:38:00.990433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:55.338 [2024-11-20 11:38:00.990444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:55.338 [2024-11-20 11:38:00.990459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:55.338 [2024-11-20 11:38:00.990490] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:55.338 [2024-11-20 11:38:00.990510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:55.338 [2024-11-20 11:38:00.990524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:55.338 [2024-11-20 11:38:00.990540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:55.338 [2024-11-20 11:38:00.990553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:55.338 [2024-11-20 11:38:00.990569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:55.338 [2024-11-20 11:38:00.990582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:55.338 [2024-11-20 11:38:00.990597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:55.338 [2024-11-20 11:38:00.990611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:55.338 [2024-11-20 11:38:00.990627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:23:55.338 [2024-11-20 11:38:00.990639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:55.338 [2024-11-20 11:38:00.990659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:55.338 [2024-11-20 11:38:00.990672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:55.338 [2024-11-20 11:38:00.990687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:55.338 [2024-11-20 11:38:00.990701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:55.338 [2024-11-20 11:38:00.990717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:55.338 [2024-11-20 11:38:00.990730] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:55.338 [2024-11-20 11:38:00.990749] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:55.338 [2024-11-20 11:38:00.990770] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:55.338 [2024-11-20 11:38:00.990789] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:55.338 [2024-11-20 11:38:00.990803] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:55.338 [2024-11-20 11:38:00.990822] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:55.338 [2024-11-20 11:38:00.990837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.338 [2024-11-20 11:38:00.990856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:55.338 [2024-11-20 11:38:00.990869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.221 ms 00:23:55.338 [2024-11-20 11:38:00.990887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.338 [2024-11-20 11:38:00.990976] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
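The two layout views above describe the same regions in different units: dump_region prints offsets and sizes in MiB, while the superblock dump lists the same regions as blk_offs/blk_sz counts of 4096-byte FTL blocks (the block_size later reported for ftl0 by bdev_get_bdevs). A minimal sanity check under that assumption; blk_to_mib is a helper name of our own, not anything in the SPDK tree:

    # 256 blocks of 4096 B = 1 MiB. Truncates to two decimals, which is
    # enough to reproduce the figures above (dump_region itself rounds).
    blk_to_mib() {
        local blocks=$(( $1 ))   # bash arithmetic accepts the 0x prefix
        printf '%d.%02d MiB\n' $(( blocks / 256 )) $(( blocks % 256 * 100 / 256 ))
    }
    blk_to_mib 0x20     # 0.12 MiB  -> l2p offset
    blk_to_mib 0x5000   # 80.00 MiB -> l2p size
    blk_to_mib 0x800    # 8.00 MiB  -> each p2l checkpoint region

The same arithmetic ties the metadata to the exposed device: the 80.00 MiB l2p region is exactly 4 bytes per block for the 20971520-block (80 GiB) bdev created below, consistent with a 4-byte L2P entry.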
00:23:55.338 [2024-11-20 11:38:00.991004] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:59.521 [2024-11-20 11:38:04.890997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.521 [2024-11-20 11:38:04.891074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:59.521 [2024-11-20 11:38:04.891099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3900.000 ms 00:23:59.521 [2024-11-20 11:38:04.891116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.521 [2024-11-20 11:38:04.938075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.521 [2024-11-20 11:38:04.938143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:59.521 [2024-11-20 11:38:04.938163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.553 ms 00:23:59.521 [2024-11-20 11:38:04.938179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.521 [2024-11-20 11:38:04.938403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.521 [2024-11-20 11:38:04.938424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:59.521 [2024-11-20 11:38:04.938438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:23:59.521 [2024-11-20 11:38:04.938457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.521 [2024-11-20 11:38:05.007003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.521 [2024-11-20 11:38:05.007084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:59.521 [2024-11-20 11:38:05.007115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.460 ms 00:23:59.521 [2024-11-20 11:38:05.007140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.521 [2024-11-20 11:38:05.007221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.521 [2024-11-20 11:38:05.007244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:59.521 [2024-11-20 11:38:05.007268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:59.521 [2024-11-20 11:38:05.007304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.521 [2024-11-20 11:38:05.007985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.521 [2024-11-20 11:38:05.008022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:59.521 [2024-11-20 11:38:05.008043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:23:59.521 [2024-11-20 11:38:05.008069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.521 [2024-11-20 11:38:05.008278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.521 [2024-11-20 11:38:05.008317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:59.521 [2024-11-20 11:38:05.008335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:23:59.521 [2024-11-20 11:38:05.008360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.521 [2024-11-20 11:38:05.035301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.521 [2024-11-20 11:38:05.035362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:59.521 [2024-11-20 
11:38:05.035381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.897 ms 00:23:59.521 [2024-11-20 11:38:05.035397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.521 [2024-11-20 11:38:05.051423] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:59.521 [2024-11-20 11:38:05.070099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.521 [2024-11-20 11:38:05.070186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:59.521 [2024-11-20 11:38:05.070209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.525 ms 00:23:59.521 [2024-11-20 11:38:05.070227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.521 [2024-11-20 11:38:05.146078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.521 [2024-11-20 11:38:05.146145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:59.521 [2024-11-20 11:38:05.146174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.755 ms 00:23:59.521 [2024-11-20 11:38:05.146187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.521 [2024-11-20 11:38:05.146503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.521 [2024-11-20 11:38:05.146525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:59.521 [2024-11-20 11:38:05.146544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:23:59.521 [2024-11-20 11:38:05.146555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.521 [2024-11-20 11:38:05.189392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.521 [2024-11-20 11:38:05.189455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:59.521 [2024-11-20 11:38:05.189484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.738 ms 00:23:59.521 [2024-11-20 11:38:05.189497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.521 [2024-11-20 11:38:05.228229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.521 [2024-11-20 11:38:05.228284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:59.521 [2024-11-20 11:38:05.228314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.651 ms 00:23:59.521 [2024-11-20 11:38:05.228328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.521 [2024-11-20 11:38:05.229143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.521 [2024-11-20 11:38:05.229176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:59.521 [2024-11-20 11:38:05.229194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.756 ms 00:23:59.521 [2024-11-20 11:38:05.229204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.780 [2024-11-20 11:38:05.350234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.780 [2024-11-20 11:38:05.350292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:59.780 [2024-11-20 11:38:05.350318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 120.944 ms 00:23:59.780 [2024-11-20 11:38:05.350334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.780 [2024-11-20 
11:38:05.392250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.780 [2024-11-20 11:38:05.392311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:59.780 [2024-11-20 11:38:05.392333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.781 ms 00:23:59.780 [2024-11-20 11:38:05.392345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.780 [2024-11-20 11:38:05.434565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.780 [2024-11-20 11:38:05.434646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:59.780 [2024-11-20 11:38:05.434667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.145 ms 00:23:59.780 [2024-11-20 11:38:05.434679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.780 [2024-11-20 11:38:05.482766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.780 [2024-11-20 11:38:05.482836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:59.780 [2024-11-20 11:38:05.482860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.985 ms 00:23:59.780 [2024-11-20 11:38:05.482874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.780 [2024-11-20 11:38:05.482970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.780 [2024-11-20 11:38:05.482986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:59.780 [2024-11-20 11:38:05.483007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:59.780 [2024-11-20 11:38:05.483029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.780 [2024-11-20 11:38:05.483238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.780 [2024-11-20 11:38:05.483276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:59.780 [2024-11-20 11:38:05.483293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:23:59.780 [2024-11-20 11:38:05.483306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.780 [2024-11-20 11:38:05.484771] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4514.237 ms, result 0 00:23:59.780 { 00:23:59.780 "name": "ftl0", 00:23:59.780 "uuid": "675c6f92-2f5e-4f8e-8a0c-85fe782bc9ba" 00:23:59.780 } 00:23:59.780 11:38:05 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:23:59.780 11:38:05 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:23:59.780 11:38:05 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:59.780 11:38:05 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:23:59.780 11:38:05 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:59.780 11:38:05 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:59.780 11:38:05 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:00.348 11:38:05 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:24:00.348 [ 00:24:00.348 { 00:24:00.348 "name": "ftl0", 00:24:00.348 "aliases": [ 00:24:00.348 "675c6f92-2f5e-4f8e-8a0c-85fe782bc9ba" 00:24:00.348 ], 00:24:00.348 "product_name": "FTL 
disk", 00:24:00.348 "block_size": 4096, 00:24:00.348 "num_blocks": 20971520, 00:24:00.348 "uuid": "675c6f92-2f5e-4f8e-8a0c-85fe782bc9ba", 00:24:00.348 "assigned_rate_limits": { 00:24:00.348 "rw_ios_per_sec": 0, 00:24:00.348 "rw_mbytes_per_sec": 0, 00:24:00.348 "r_mbytes_per_sec": 0, 00:24:00.348 "w_mbytes_per_sec": 0 00:24:00.348 }, 00:24:00.348 "claimed": false, 00:24:00.348 "zoned": false, 00:24:00.348 "supported_io_types": { 00:24:00.348 "read": true, 00:24:00.348 "write": true, 00:24:00.348 "unmap": true, 00:24:00.348 "flush": true, 00:24:00.348 "reset": false, 00:24:00.348 "nvme_admin": false, 00:24:00.348 "nvme_io": false, 00:24:00.348 "nvme_io_md": false, 00:24:00.348 "write_zeroes": true, 00:24:00.348 "zcopy": false, 00:24:00.348 "get_zone_info": false, 00:24:00.348 "zone_management": false, 00:24:00.348 "zone_append": false, 00:24:00.348 "compare": false, 00:24:00.348 "compare_and_write": false, 00:24:00.348 "abort": false, 00:24:00.348 "seek_hole": false, 00:24:00.348 "seek_data": false, 00:24:00.348 "copy": false, 00:24:00.348 "nvme_iov_md": false 00:24:00.348 }, 00:24:00.348 "driver_specific": { 00:24:00.348 "ftl": { 00:24:00.348 "base_bdev": "54f7d204-2db0-49c6-8fe5-bc921146b41c", 00:24:00.348 "cache": "nvc0n1p0" 00:24:00.348 } 00:24:00.348 } 00:24:00.348 } 00:24:00.348 ] 00:24:00.607 11:38:06 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:24:00.607 11:38:06 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:24:00.607 11:38:06 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:24:00.865 11:38:06 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:24:00.865 11:38:06 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:24:01.124 [2024-11-20 11:38:06.726331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.124 [2024-11-20 11:38:06.726596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:01.124 [2024-11-20 11:38:06.726629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:01.124 [2024-11-20 11:38:06.726646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.125 [2024-11-20 11:38:06.726712] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:01.125 [2024-11-20 11:38:06.732101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.125 [2024-11-20 11:38:06.732150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:01.125 [2024-11-20 11:38:06.732171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.353 ms 00:24:01.125 [2024-11-20 11:38:06.732183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.125 [2024-11-20 11:38:06.732690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.125 [2024-11-20 11:38:06.732707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:01.125 [2024-11-20 11:38:06.732722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.451 ms 00:24:01.125 [2024-11-20 11:38:06.732734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.125 [2024-11-20 11:38:06.735908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.125 [2024-11-20 11:38:06.735971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:01.125 
[2024-11-20 11:38:06.735989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.138 ms 00:24:01.125 [2024-11-20 11:38:06.736002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.125 [2024-11-20 11:38:06.742182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.125 [2024-11-20 11:38:06.742380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:01.125 [2024-11-20 11:38:06.742422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.128 ms 00:24:01.125 [2024-11-20 11:38:06.742437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.125 [2024-11-20 11:38:06.787940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.125 [2024-11-20 11:38:06.788007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:01.125 [2024-11-20 11:38:06.788029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.310 ms 00:24:01.125 [2024-11-20 11:38:06.788041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.125 [2024-11-20 11:38:06.814584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.125 [2024-11-20 11:38:06.814669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:01.125 [2024-11-20 11:38:06.814691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.397 ms 00:24:01.125 [2024-11-20 11:38:06.814706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.125 [2024-11-20 11:38:06.814957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.125 [2024-11-20 11:38:06.814972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:01.125 [2024-11-20 11:38:06.814987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.176 ms 00:24:01.125 [2024-11-20 11:38:06.814997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.125 [2024-11-20 11:38:06.859975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.125 [2024-11-20 11:38:06.860044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:01.125 [2024-11-20 11:38:06.860066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.919 ms 00:24:01.125 [2024-11-20 11:38:06.860078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.385 [2024-11-20 11:38:06.904381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.385 [2024-11-20 11:38:06.904453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:01.385 [2024-11-20 11:38:06.904491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.190 ms 00:24:01.385 [2024-11-20 11:38:06.904504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.385 [2024-11-20 11:38:06.948402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.385 [2024-11-20 11:38:06.948491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:01.385 [2024-11-20 11:38:06.948516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.766 ms 00:24:01.385 [2024-11-20 11:38:06.948530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.385 [2024-11-20 11:38:06.993054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.385 [2024-11-20 11:38:06.993124] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:01.385 [2024-11-20 11:38:06.993149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.263 ms 00:24:01.385 [2024-11-20 11:38:06.993163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.385 [2024-11-20 11:38:06.993278] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:01.385 [2024-11-20 11:38:06.993299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:01.385 [2024-11-20 11:38:06.993318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:01.385 [2024-11-20 11:38:06.993332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:01.385 [2024-11-20 11:38:06.993348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:01.385 [2024-11-20 11:38:06.993362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:01.385 [2024-11-20 11:38:06.993378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:01.385 [2024-11-20 11:38:06.993391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:01.385 [2024-11-20 11:38:06.993411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:01.385 [2024-11-20 11:38:06.993424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:01.385 [2024-11-20 11:38:06.993439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:01.385 [2024-11-20 11:38:06.993452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:01.385 [2024-11-20 11:38:06.993468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:01.385 [2024-11-20 11:38:06.993499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:01.385 [2024-11-20 11:38:06.993516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:01.385 [2024-11-20 11:38:06.993528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:01.385 [2024-11-20 11:38:06.993544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:01.385 [2024-11-20 11:38:06.993557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:01.385 [2024-11-20 11:38:06.993573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:01.385 [2024-11-20 11:38:06.993602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:01.385 [2024-11-20 11:38:06.993619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:01.385 [2024-11-20 11:38:06.993632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:01.385 [2024-11-20 11:38:06.993650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:01.385 
[2024-11-20 11:38:06.993663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:01.385 [2024-11-20 11:38:06.993682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:01.385 [2024-11-20 11:38:06.993695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:01.385 [2024-11-20 11:38:06.993711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:01.385 [2024-11-20 11:38:06.993725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:01.385 [2024-11-20 11:38:06.993741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:01.385 [2024-11-20 11:38:06.993754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:01.385 [2024-11-20 11:38:06.993775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:01.385 [2024-11-20 11:38:06.993787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:01.385 [2024-11-20 11:38:06.993803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:01.385 [2024-11-20 11:38:06.993816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:01.385 [2024-11-20 11:38:06.993835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:01.385 [2024-11-20 11:38:06.993848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:01.385 [2024-11-20 11:38:06.993865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:01.385 [2024-11-20 11:38:06.993879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:01.385 [2024-11-20 11:38:06.993895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:01.385 [2024-11-20 11:38:06.993908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:01.385 [2024-11-20 11:38:06.993926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:01.385 [2024-11-20 11:38:06.993940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:01.385 [2024-11-20 11:38:06.993955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:01.385 [2024-11-20 11:38:06.993969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:01.385 [2024-11-20 11:38:06.993984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:01.385 [2024-11-20 11:38:06.993997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:01.385 [2024-11-20 11:38:06.994012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:01.386 [2024-11-20 11:38:06.994025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:24:01.386 [2024-11-20 11:38:06.994042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:01.386 [2024-11-20 11:38:06.994071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:01.386 [2024-11-20 11:38:06.994087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:01.386 [2024-11-20 11:38:06.994100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:01.386 [2024-11-20 11:38:06.994117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:01.386 [2024-11-20 11:38:06.994130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:01.386 [2024-11-20 11:38:06.994146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:01.386 [2024-11-20 11:38:06.994159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:01.386 [2024-11-20 11:38:06.994178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:01.386 [2024-11-20 11:38:06.994202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:01.386 [2024-11-20 11:38:06.994218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:01.386 [2024-11-20 11:38:06.994230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:01.386 [2024-11-20 11:38:06.994246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:01.386 [2024-11-20 11:38:06.994258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:01.386 [2024-11-20 11:38:06.994274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:01.386 [2024-11-20 11:38:06.994287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:01.386 [2024-11-20 11:38:06.994313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:01.386 [2024-11-20 11:38:06.994326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:01.386 [2024-11-20 11:38:06.994372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:01.386 [2024-11-20 11:38:06.994387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:01.386 [2024-11-20 11:38:06.994406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:01.386 [2024-11-20 11:38:06.994419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:01.386 [2024-11-20 11:38:06.994440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:01.386 [2024-11-20 11:38:06.994453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:01.386 [2024-11-20 11:38:06.994483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:24:01.386 [2024-11-20 11:38:06.994497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:01.386 [2024-11-20 11:38:06.994515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:01.386 [2024-11-20 11:38:06.994528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:01.386 [2024-11-20 11:38:06.994544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:01.386 [2024-11-20 11:38:06.994556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:01.386 [2024-11-20 11:38:06.994572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:01.386 [2024-11-20 11:38:06.994585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:01.386 [2024-11-20 11:38:06.994601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:01.386 [2024-11-20 11:38:06.994614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:01.386 [2024-11-20 11:38:06.994629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:01.386 [2024-11-20 11:38:06.994641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:01.386 [2024-11-20 11:38:06.994679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:01.386 [2024-11-20 11:38:06.994692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:01.386 [2024-11-20 11:38:06.994707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:01.386 [2024-11-20 11:38:06.994723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:01.386 [2024-11-20 11:38:06.994753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:01.386 [2024-11-20 11:38:06.994775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:01.386 [2024-11-20 11:38:06.994801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:01.386 [2024-11-20 11:38:06.994818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:01.386 [2024-11-20 11:38:06.994834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:01.386 [2024-11-20 11:38:06.994846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:01.386 [2024-11-20 11:38:06.994862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:01.386 [2024-11-20 11:38:06.994875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:01.386 [2024-11-20 11:38:06.994891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:01.386 [2024-11-20 11:38:06.994904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:01.386 [2024-11-20 11:38:06.994921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:01.386 [2024-11-20 11:38:06.994934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:01.386 [2024-11-20 11:38:06.994952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:01.386 [2024-11-20 11:38:06.994975] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:01.386 [2024-11-20 11:38:06.994990] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 675c6f92-2f5e-4f8e-8a0c-85fe782bc9ba 00:24:01.386 [2024-11-20 11:38:06.995004] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:01.386 [2024-11-20 11:38:06.995021] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:01.386 [2024-11-20 11:38:06.995034] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:01.386 [2024-11-20 11:38:06.995053] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:01.386 [2024-11-20 11:38:06.995065] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:01.386 [2024-11-20 11:38:06.995086] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:01.386 [2024-11-20 11:38:06.995098] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:01.386 [2024-11-20 11:38:06.995112] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:01.386 [2024-11-20 11:38:06.995123] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:01.386 [2024-11-20 11:38:06.995138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.386 [2024-11-20 11:38:06.995151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:01.386 [2024-11-20 11:38:06.995167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.864 ms 00:24:01.386 [2024-11-20 11:38:06.995179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.386 [2024-11-20 11:38:07.019056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.386 [2024-11-20 11:38:07.019125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:01.386 [2024-11-20 11:38:07.019146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.768 ms 00:24:01.386 [2024-11-20 11:38:07.019160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.386 [2024-11-20 11:38:07.019860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.386 [2024-11-20 11:38:07.019886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:01.386 [2024-11-20 11:38:07.019903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.640 ms 00:24:01.386 [2024-11-20 11:38:07.019915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.386 [2024-11-20 11:38:07.102040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.386 [2024-11-20 11:38:07.102102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:01.386 [2024-11-20 11:38:07.102124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.386 [2024-11-20 11:38:07.102137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
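Every management step in this log is traced as an Action (or Rollback) notice followed by name, duration and status lines, which makes it easy to see where startup and shutdown time goes; the 3900 ms "Scrub NV cache" step dominated the 4514 ms startup, and WAF prints inf above simply because 960 internal writes have been made against 0 user writes. A rough awk sketch for tabulating the slowest steps; build.log is a placeholder for a saved copy of this console output, and the one-notice-per-line layout of the original output is assumed:

    # Pair each 'name:' trace_step notice with the 'duration:' notice
    # that follows it, then list the slowest steps first.
    awk '
        /trace_step/ && /name:/     { sub(/.*name: /, "");     name = $0 }
        /trace_step/ && /duration:/ { sub(/.*duration: /, ""); sub(/ ms.*/, "")
                                      printf "%10.3f ms  %s\n", $0 + 0, name }
    ' build.log | sort -rn | head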
00:24:01.386 [2024-11-20 11:38:07.102235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.386 [2024-11-20 11:38:07.102249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:01.386 [2024-11-20 11:38:07.102264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.386 [2024-11-20 11:38:07.102288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.386 [2024-11-20 11:38:07.102475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.386 [2024-11-20 11:38:07.102492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:01.386 [2024-11-20 11:38:07.102530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.386 [2024-11-20 11:38:07.102541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.386 [2024-11-20 11:38:07.102579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.386 [2024-11-20 11:38:07.102592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:01.386 [2024-11-20 11:38:07.102606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.386 [2024-11-20 11:38:07.102617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.646 [2024-11-20 11:38:07.255354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.646 [2024-11-20 11:38:07.255453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:01.646 [2024-11-20 11:38:07.255490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.646 [2024-11-20 11:38:07.255504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.646 [2024-11-20 11:38:07.374378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.646 [2024-11-20 11:38:07.374744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:01.646 [2024-11-20 11:38:07.374792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.646 [2024-11-20 11:38:07.374805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.646 [2024-11-20 11:38:07.374952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.646 [2024-11-20 11:38:07.374967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:01.646 [2024-11-20 11:38:07.374982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.646 [2024-11-20 11:38:07.374998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.646 [2024-11-20 11:38:07.375092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.646 [2024-11-20 11:38:07.375106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:01.646 [2024-11-20 11:38:07.375120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.646 [2024-11-20 11:38:07.375132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.646 [2024-11-20 11:38:07.375312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.646 [2024-11-20 11:38:07.375337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:01.646 [2024-11-20 11:38:07.375362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.646 [2024-11-20 
11:38:07.375383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.646 [2024-11-20 11:38:07.375499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.646 [2024-11-20 11:38:07.375519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:01.646 [2024-11-20 11:38:07.375538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.646 [2024-11-20 11:38:07.375554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.646 [2024-11-20 11:38:07.375622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.646 [2024-11-20 11:38:07.375644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:01.646 [2024-11-20 11:38:07.375666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.646 [2024-11-20 11:38:07.375678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.646 [2024-11-20 11:38:07.375753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:01.646 [2024-11-20 11:38:07.375768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:01.646 [2024-11-20 11:38:07.375783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:01.646 [2024-11-20 11:38:07.375795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.646 [2024-11-20 11:38:07.375983] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 649.617 ms, result 0 00:24:01.646 true 00:24:01.646 11:38:07 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 77664 00:24:01.646 11:38:07 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 77664 ']' 00:24:01.905 11:38:07 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 77664 00:24:01.905 11:38:07 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:24:01.905 11:38:07 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:01.905 11:38:07 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77664 00:24:01.905 killing process with pid 77664 00:24:01.905 11:38:07 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:01.905 11:38:07 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:01.905 11:38:07 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77664' 00:24:01.905 11:38:07 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 77664 00:24:01.905 11:38:07 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 77664 00:24:07.177 11:38:12 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:24:07.177 11:38:12 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:24:07.177 11:38:12 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:24:07.177 11:38:12 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:07.177 11:38:12 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:24:07.177 11:38:12 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:24:07.177 11:38:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:24:07.177 11:38:12 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:07.177 11:38:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:07.177 11:38:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:07.177 11:38:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:07.177 11:38:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:24:07.177 11:38:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:07.177 11:38:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:07.177 11:38:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:24:07.177 11:38:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:07.177 11:38:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:07.177 11:38:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:24:07.177 11:38:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:24:07.177 11:38:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:24:07.177 11:38:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:07.177 11:38:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:24:07.177 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:24:07.177 fio-3.35 00:24:07.177 Starting 1 thread 00:24:13.741 00:24:13.741 test: (groupid=0, jobs=1): err= 0: pid=77902: Wed Nov 20 11:38:18 2024 00:24:13.741 read: IOPS=924, BW=61.4MiB/s (64.4MB/s)(255MiB/4146msec) 00:24:13.741 slat (nsec): min=4550, max=37767, avg=7173.08, stdev=3235.70 00:24:13.741 clat (usec): min=315, max=17688, avg=476.34, stdev=288.15 00:24:13.741 lat (usec): min=324, max=17696, avg=483.51, stdev=288.25 00:24:13.741 clat percentiles (usec): 00:24:13.741 | 1.00th=[ 351], 5.00th=[ 371], 10.00th=[ 392], 20.00th=[ 424], 00:24:13.741 | 30.00th=[ 441], 40.00th=[ 449], 50.00th=[ 465], 60.00th=[ 490], 00:24:13.741 | 70.00th=[ 506], 80.00th=[ 519], 90.00th=[ 537], 95.00th=[ 570], 00:24:13.741 | 99.00th=[ 611], 99.50th=[ 644], 99.90th=[ 693], 99.95th=[ 3294], 00:24:13.741 | 99.99th=[17695] 00:24:13.741 write: IOPS=930, BW=61.8MiB/s (64.8MB/s)(256MiB/4142msec); 0 zone resets 00:24:13.741 slat (usec): min=16, max=112, avg=27.16, stdev= 6.34 00:24:13.741 clat (usec): min=368, max=2666, avg=547.24, stdev=74.96 00:24:13.741 lat (usec): min=406, max=2689, avg=574.40, stdev=74.93 00:24:13.741 clat percentiles (usec): 00:24:13.741 | 1.00th=[ 424], 5.00th=[ 453], 10.00th=[ 469], 20.00th=[ 486], 00:24:13.741 | 30.00th=[ 510], 40.00th=[ 529], 50.00th=[ 545], 60.00th=[ 553], 00:24:13.741 | 70.00th=[ 578], 80.00th=[ 603], 90.00th=[ 627], 95.00th=[ 644], 00:24:13.741 | 99.00th=[ 750], 99.50th=[ 816], 99.90th=[ 988], 99.95th=[ 1139], 00:24:13.741 | 99.99th=[ 2671] 00:24:13.741 bw ( KiB/s): min=61472, max=65019, per=100.00%, avg=63377.38, stdev=972.52, samples=8 00:24:13.741 iops : min= 904, max= 956, avg=932.00, stdev=14.26, samples=8 00:24:13.741 lat (usec) : 500=46.13%, 750=53.32%, 1000=0.47% 00:24:13.741 lat 
(msec) : 2=0.04%, 4=0.03%, 20=0.01% 00:24:13.741 cpu : usr=99.01%, sys=0.17%, ctx=10, majf=0, minf=1169 00:24:13.741 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:13.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:13.741 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:13.741 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:13.741 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:13.741 00:24:13.741 Run status group 0 (all jobs): 00:24:13.741 READ: bw=61.4MiB/s (64.4MB/s), 61.4MiB/s-61.4MiB/s (64.4MB/s-64.4MB/s), io=255MiB (267MB), run=4146-4146msec 00:24:13.741 WRITE: bw=61.8MiB/s (64.8MB/s), 61.8MiB/s-61.8MiB/s (64.8MB/s-64.8MB/s), io=256MiB (269MB), run=4142-4142msec 00:24:15.118 ----------------------------------------------------- 00:24:15.118 Suppressions used: 00:24:15.118 count bytes template 00:24:15.118 1 5 /usr/src/fio/parse.c 00:24:15.118 1 8 libtcmalloc_minimal.so 00:24:15.118 1 904 libcrypto.so 00:24:15.118 ----------------------------------------------------- 00:24:15.118 00:24:15.118 11:38:20 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:24:15.118 11:38:20 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:15.118 11:38:20 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:24:15.118 11:38:20 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:24:15.118 11:38:20 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:24:15.118 11:38:20 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:15.118 11:38:20 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:24:15.118 11:38:20 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:24:15.118 11:38:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:24:15.118 11:38:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:15.118 11:38:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:15.376 11:38:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:15.376 11:38:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:15.376 11:38:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:24:15.376 11:38:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:15.376 11:38:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:15.376 11:38:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:15.376 11:38:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:15.376 11:38:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:24:15.376 11:38:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:24:15.376 11:38:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:24:15.376 11:38:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:24:15.376 11:38:20 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:15.376 11:38:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:24:15.634 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:24:15.634 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:24:15.634 fio-3.35 00:24:15.634 Starting 2 threads 00:24:47.704 00:24:47.704 first_half: (groupid=0, jobs=1): err= 0: pid=78016: Wed Nov 20 11:38:48 2024 00:24:47.704 read: IOPS=2561, BW=10.0MiB/s (10.5MB/s)(256MiB/25561msec) 00:24:47.704 slat (nsec): min=3619, max=44496, avg=6816.64, stdev=2098.86 00:24:47.704 clat (usec): min=633, max=321130, avg=41839.66, stdev=27962.14 00:24:47.704 lat (usec): min=637, max=321135, avg=41846.47, stdev=27962.44 00:24:47.704 clat percentiles (msec): 00:24:47.704 | 1.00th=[ 9], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:24:47.704 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 35], 60.00th=[ 36], 00:24:47.704 | 70.00th=[ 39], 80.00th=[ 41], 90.00th=[ 47], 95.00th=[ 85], 00:24:47.704 | 99.00th=[ 186], 99.50th=[ 205], 99.90th=[ 255], 99.95th=[ 271], 00:24:47.704 | 99.99th=[ 313] 00:24:47.704 write: IOPS=2567, BW=10.0MiB/s (10.5MB/s)(256MiB/25527msec); 0 zone resets 00:24:47.704 slat (usec): min=4, max=637, avg= 8.19, stdev= 5.72 00:24:47.704 clat (usec): min=390, max=53467, avg=8089.76, stdev=7623.59 00:24:47.704 lat (usec): min=422, max=53474, avg=8097.95, stdev=7623.76 00:24:47.704 clat percentiles (usec): 00:24:47.704 | 1.00th=[ 971], 5.00th=[ 1369], 10.00th=[ 1745], 20.00th=[ 3097], 00:24:47.704 | 30.00th=[ 4359], 40.00th=[ 5538], 50.00th=[ 6456], 60.00th=[ 7177], 00:24:47.704 | 70.00th=[ 8225], 80.00th=[ 9765], 90.00th=[15008], 95.00th=[23987], 00:24:47.704 | 99.00th=[41157], 99.50th=[44303], 99.90th=[50070], 99.95th=[51119], 00:24:47.704 | 99.99th=[52167] 00:24:47.704 bw ( KiB/s): min= 24, max=41912, per=100.00%, avg=20833.28, stdev=12220.64, samples=25 00:24:47.704 iops : min= 6, max=10478, avg=5208.40, stdev=3055.11, samples=25 00:24:47.704 lat (usec) : 500=0.02%, 750=0.15%, 1000=0.43% 00:24:47.704 lat (msec) : 2=5.77%, 4=6.99%, 10=28.23%, 20=6.72%, 50=47.93% 00:24:47.704 lat (msec) : 100=1.60%, 250=2.12%, 500=0.05% 00:24:47.704 cpu : usr=99.19%, sys=0.16%, ctx=36, majf=0, minf=5561 00:24:47.704 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:24:47.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:47.704 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:47.704 issued rwts: total=65475,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:47.704 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:47.704 second_half: (groupid=0, jobs=1): err= 0: pid=78017: Wed Nov 20 11:38:48 2024 00:24:47.704 read: IOPS=2581, BW=10.1MiB/s (10.6MB/s)(256MiB/25373msec) 00:24:47.704 slat (nsec): min=3807, max=32794, avg=6860.94, stdev=1976.67 00:24:47.704 clat (msec): min=10, max=242, avg=42.23, stdev=25.45 00:24:47.704 lat (msec): min=10, max=242, avg=42.24, stdev=25.45 00:24:47.704 clat percentiles (msec): 00:24:47.704 | 1.00th=[ 32], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:24:47.704 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 35], 60.00th=[ 37], 00:24:47.704 | 70.00th=[ 40], 80.00th=[ 41], 90.00th=[ 48], 95.00th=[ 80], 
00:24:47.704 | 99.00th=[ 180], 99.50th=[ 207], 99.90th=[ 232], 99.95th=[ 236], 00:24:47.704 | 99.99th=[ 243] 00:24:47.704 write: IOPS=2596, BW=10.1MiB/s (10.6MB/s)(256MiB/25240msec); 0 zone resets 00:24:47.704 slat (usec): min=4, max=265, avg= 8.15, stdev= 4.46 00:24:47.704 clat (usec): min=445, max=40486, avg=7332.04, stdev=4815.33 00:24:47.704 lat (usec): min=453, max=40493, avg=7340.19, stdev=4815.65 00:24:47.704 clat percentiles (usec): 00:24:47.704 | 1.00th=[ 1385], 5.00th=[ 2089], 10.00th=[ 2868], 20.00th=[ 3818], 00:24:47.704 | 30.00th=[ 4817], 40.00th=[ 5538], 50.00th=[ 6390], 60.00th=[ 6980], 00:24:47.704 | 70.00th=[ 7832], 80.00th=[ 8848], 90.00th=[14484], 95.00th=[16057], 00:24:47.704 | 99.00th=[26608], 99.50th=[31589], 99.90th=[36963], 99.95th=[38536], 00:24:47.704 | 99.99th=[39060] 00:24:47.704 bw ( KiB/s): min= 1136, max=47576, per=100.00%, avg=27399.16, stdev=13344.78, samples=19 00:24:47.704 iops : min= 284, max=11894, avg=6849.79, stdev=3336.19, samples=19 00:24:47.705 lat (usec) : 500=0.01%, 750=0.07%, 1000=0.11% 00:24:47.705 lat (msec) : 2=2.05%, 4=8.96%, 10=30.19%, 20=7.53%, 50=47.18% 00:24:47.705 lat (msec) : 100=1.96%, 250=1.95% 00:24:47.705 cpu : usr=99.18%, sys=0.21%, ctx=39, majf=0, minf=5552 00:24:47.705 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:47.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:47.705 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:47.705 issued rwts: total=65489,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:47.705 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:47.705 00:24:47.705 Run status group 0 (all jobs): 00:24:47.705 READ: bw=20.0MiB/s (21.0MB/s), 10.0MiB/s-10.1MiB/s (10.5MB/s-10.6MB/s), io=512MiB (536MB), run=25373-25561msec 00:24:47.705 WRITE: bw=20.1MiB/s (21.0MB/s), 10.0MiB/s-10.1MiB/s (10.5MB/s-10.6MB/s), io=512MiB (537MB), run=25240-25527msec 00:24:47.705 ----------------------------------------------------- 00:24:47.705 Suppressions used: 00:24:47.705 count bytes template 00:24:47.705 2 10 /usr/src/fio/parse.c 00:24:47.705 2 192 /usr/src/fio/iolog.c 00:24:47.705 1 8 libtcmalloc_minimal.so 00:24:47.705 1 904 libcrypto.so 00:24:47.705 ----------------------------------------------------- 00:24:47.705 00:24:47.705 11:38:51 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:24:47.705 11:38:51 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:47.705 11:38:51 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:24:47.705 11:38:51 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:24:47.705 11:38:51 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:24:47.705 11:38:51 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:47.705 11:38:51 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:24:47.705 11:38:51 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:24:47.705 11:38:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:24:47.705 11:38:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:47.705 11:38:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:47.705 11:38:51 
ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:47.705 11:38:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:47.705 11:38:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:24:47.705 11:38:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:47.705 11:38:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:47.705 11:38:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:24:47.705 11:38:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:47.705 11:38:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:47.705 11:38:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:24:47.705 11:38:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:24:47.705 11:38:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:24:47.705 11:38:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:47.705 11:38:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:24:47.705 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:24:47.705 fio-3.35 00:24:47.705 Starting 1 thread 00:25:02.586 00:25:02.586 test: (groupid=0, jobs=1): err= 0: pid=78348: Wed Nov 20 11:39:06 2024 00:25:02.586 read: IOPS=7238, BW=28.3MiB/s (29.6MB/s)(255MiB/9008msec) 00:25:02.586 slat (nsec): min=3551, max=32314, avg=5709.52, stdev=1782.90 00:25:02.586 clat (usec): min=706, max=34801, avg=17673.56, stdev=1267.47 00:25:02.586 lat (usec): min=710, max=34806, avg=17679.27, stdev=1267.48 00:25:02.586 clat percentiles (usec): 00:25:02.586 | 1.00th=[16581], 5.00th=[16712], 10.00th=[16909], 20.00th=[17171], 00:25:02.586 | 30.00th=[17171], 40.00th=[17171], 50.00th=[17433], 60.00th=[17433], 00:25:02.586 | 70.00th=[17695], 80.00th=[17957], 90.00th=[18482], 95.00th=[20579], 00:25:02.586 | 99.00th=[21627], 99.50th=[23200], 99.90th=[28181], 99.95th=[30540], 00:25:02.586 | 99.99th=[34341] 00:25:02.586 write: IOPS=12.8k, BW=49.9MiB/s (52.3MB/s)(256MiB/5132msec); 0 zone resets 00:25:02.586 slat (usec): min=4, max=551, avg= 8.56, stdev= 6.54 00:25:02.586 clat (usec): min=606, max=55131, avg=9968.55, stdev=12417.34 00:25:02.586 lat (usec): min=614, max=55140, avg=9977.12, stdev=12417.34 00:25:02.586 clat percentiles (usec): 00:25:02.586 | 1.00th=[ 898], 5.00th=[ 1057], 10.00th=[ 1172], 20.00th=[ 1336], 00:25:02.586 | 30.00th=[ 1500], 40.00th=[ 1942], 50.00th=[ 6587], 60.00th=[ 7504], 00:25:02.586 | 70.00th=[ 8586], 80.00th=[10945], 90.00th=[35914], 95.00th=[37487], 00:25:02.586 | 99.00th=[44303], 99.50th=[45351], 99.90th=[49546], 99.95th=[51643], 00:25:02.586 | 99.99th=[53740] 00:25:02.586 bw ( KiB/s): min=11224, max=69096, per=93.31%, avg=47662.55, stdev=14834.56, samples=11 00:25:02.586 iops : min= 2806, max=17274, avg=11915.64, stdev=3708.64, samples=11 00:25:02.586 lat (usec) : 750=0.07%, 1000=1.38% 00:25:02.586 lat (msec) : 2=18.76%, 4=0.83%, 10=17.57%, 20=49.84%, 50=11.50% 00:25:02.586 lat (msec) : 100=0.04% 00:25:02.586 cpu : usr=98.71%, sys=0.52%, ctx=110, majf=0, minf=5565 
00:25:02.586 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:25:02.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:02.586 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:02.586 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:02.586 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:02.586 00:25:02.586 Run status group 0 (all jobs): 00:25:02.586 READ: bw=28.3MiB/s (29.6MB/s), 28.3MiB/s-28.3MiB/s (29.6MB/s-29.6MB/s), io=255MiB (267MB), run=9008-9008msec 00:25:02.586 WRITE: bw=49.9MiB/s (52.3MB/s), 49.9MiB/s-49.9MiB/s (52.3MB/s-52.3MB/s), io=256MiB (268MB), run=5132-5132msec 00:25:03.542 ----------------------------------------------------- 00:25:03.542 Suppressions used: 00:25:03.542 count bytes template 00:25:03.542 1 5 /usr/src/fio/parse.c 00:25:03.542 2 192 /usr/src/fio/iolog.c 00:25:03.542 1 8 libtcmalloc_minimal.so 00:25:03.542 1 904 libcrypto.so 00:25:03.542 ----------------------------------------------------- 00:25:03.542 00:25:03.542 11:39:08 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:25:03.542 11:39:08 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:03.542 11:39:08 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:25:03.543 11:39:09 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:03.543 Remove shared memory files 00:25:03.543 11:39:09 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:25:03.543 11:39:09 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:03.543 11:39:09 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:25:03.543 11:39:09 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:25:03.543 11:39:09 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid58110 /dev/shm/spdk_tgt_trace.pid76563 00:25:03.543 11:39:09 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:03.543 11:39:09 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:25:03.543 ************************************ 00:25:03.543 END TEST ftl_fio_basic 00:25:03.543 ************************************ 00:25:03.543 00:25:03.543 real 1m13.731s 00:25:03.543 user 2m42.137s 00:25:03.543 sys 0m4.352s 00:25:03.543 11:39:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:03.543 11:39:09 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:25:03.543 11:39:09 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:25:03.543 11:39:09 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:25:03.543 11:39:09 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:03.543 11:39:09 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:03.543 ************************************ 00:25:03.543 START TEST ftl_bdevperf 00:25:03.543 ************************************ 00:25:03.543 11:39:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:25:03.543 * Looking for test storage... 
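The ftl_fio_basic runs above drive the FTL bdev through fio's external SPDK ioengine rather than a kernel block device. The helper traced at autotest_common.sh@1349 resolves libasan out of the plugin's ldd output and preloads it ahead of the spdk_bdev plugin, so the ASAN runtime is loaded before the instrumented plugin. A condensed sketch of what that helper executes, reconstructed from the trace (paths exactly as logged; the real helper loops over several sanitizers):
  # locate the ASAN runtime the fio plugin was linked against
  asan_lib=$(ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev | grep libasan | awk '{print $3}')
  # preload ASAN first, then the SPDK bdev ioengine, and run the job file
  LD_PRELOAD="$asan_lib /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev" \
    /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio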
00:25:03.543 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:03.543 11:39:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:03.543 11:39:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:03.543 11:39:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:25:03.543 11:39:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:03.543 11:39:09 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:03.543 11:39:09 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:03.543 11:39:09 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:03.543 11:39:09 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:25:03.543 11:39:09 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:25:03.543 11:39:09 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:25:03.543 11:39:09 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:25:03.543 11:39:09 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:25:03.543 11:39:09 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:25:03.543 11:39:09 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:25:03.543 11:39:09 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:03.543 11:39:09 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:25:03.543 11:39:09 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:25:03.543 11:39:09 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:03.543 11:39:09 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:03.543 11:39:09 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:25:03.543 11:39:09 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:25:03.543 11:39:09 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:03.543 11:39:09 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:25:03.543 11:39:09 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:03.543 11:39:09 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:25:03.543 11:39:09 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:25:03.543 11:39:09 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:03.543 11:39:09 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:25:03.543 11:39:09 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:03.543 11:39:09 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:03.543 11:39:09 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:03.543 11:39:09 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:25:03.543 11:39:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:03.543 11:39:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:03.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.543 --rc genhtml_branch_coverage=1 00:25:03.543 --rc genhtml_function_coverage=1 00:25:03.543 --rc genhtml_legend=1 00:25:03.543 --rc geninfo_all_blocks=1 00:25:03.543 --rc geninfo_unexecuted_blocks=1 00:25:03.543 00:25:03.543 ' 00:25:03.543 11:39:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:03.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.543 --rc genhtml_branch_coverage=1 00:25:03.543 
--rc genhtml_function_coverage=1 00:25:03.543 --rc genhtml_legend=1 00:25:03.543 --rc geninfo_all_blocks=1 00:25:03.543 --rc geninfo_unexecuted_blocks=1 00:25:03.543 00:25:03.543 ' 00:25:03.543 11:39:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:03.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.543 --rc genhtml_branch_coverage=1 00:25:03.543 --rc genhtml_function_coverage=1 00:25:03.543 --rc genhtml_legend=1 00:25:03.543 --rc geninfo_all_blocks=1 00:25:03.543 --rc geninfo_unexecuted_blocks=1 00:25:03.543 00:25:03.543 ' 00:25:03.543 11:39:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:03.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.543 --rc genhtml_branch_coverage=1 00:25:03.543 --rc genhtml_function_coverage=1 00:25:03.543 --rc genhtml_legend=1 00:25:03.543 --rc geninfo_all_blocks=1 00:25:03.543 --rc geninfo_unexecuted_blocks=1 00:25:03.543 00:25:03.543 ' 00:25:03.543 11:39:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:03.543 11:39:09 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:25:03.543 11:39:09 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:03.802 11:39:09 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:03.802 11:39:09 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:25:03.802 11:39:09 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:03.802 11:39:09 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:03.802 11:39:09 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:03.802 11:39:09 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:03.802 11:39:09 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:03.802 11:39:09 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:03.802 11:39:09 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:03.802 11:39:09 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:03.802 11:39:09 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:03.802 11:39:09 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:03.802 11:39:09 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:03.802 11:39:09 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:03.802 11:39:09 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:03.802 11:39:09 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:03.802 11:39:09 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:03.802 11:39:09 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:03.802 11:39:09 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:03.802 11:39:09 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:03.802 11:39:09 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:03.802 11:39:09 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:03.802 11:39:09 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:03.802 11:39:09 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:03.802 11:39:09 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:03.802 11:39:09 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:03.802 11:39:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:25:03.802 11:39:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:25:03.802 11:39:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:25:03.802 11:39:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:03.802 11:39:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:25:03.802 11:39:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=78592 00:25:03.802 11:39:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:25:03.802 11:39:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:25:03.802 11:39:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 78592 00:25:03.802 11:39:09 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 78592 ']' 00:25:03.802 11:39:09 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:03.802 11:39:09 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:03.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:03.802 11:39:09 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:03.802 11:39:09 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:03.802 11:39:09 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:03.802 [2024-11-20 11:39:09.439196] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
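The bdevperf instance just launched runs in RPC-driven mode: -z keeps the application idle after startup so workloads can be triggered later over the RPC socket, and -T ftl0 restricts the run to the ftl0 bdev once it exists. A minimal sketch of the control flow of test/ftl/bdevperf.sh as traced here (the exact helper internals differ; waitforlisten is the autotest helper that blocks until /var/tmp/spdk.sock accepts RPCs):
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
  bdevperf_pid=$!
  trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT
  waitforlisten "$bdevperf_pid"   # block until the RPC socket is up
  # ... assemble ftl0 via rpc.py (see the trace that follows), then:
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632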
00:25:03.802 [2024-11-20 11:39:09.439595] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78592 ] 00:25:04.061 [2024-11-20 11:39:09.642461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.061 [2024-11-20 11:39:09.773274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:04.998 11:39:10 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:04.998 11:39:10 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:25:04.998 11:39:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:04.998 11:39:10 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:25:04.998 11:39:10 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:04.998 11:39:10 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:25:04.998 11:39:10 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:25:04.998 11:39:10 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:05.257 11:39:10 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:05.257 11:39:10 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:25:05.257 11:39:10 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:05.257 11:39:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:25:05.257 11:39:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:05.257 11:39:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:25:05.257 11:39:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:25:05.257 11:39:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:05.515 11:39:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:05.515 { 00:25:05.515 "name": "nvme0n1", 00:25:05.515 "aliases": [ 00:25:05.515 "efaf120e-d1fa-49e4-b791-1bee8da58eb4" 00:25:05.515 ], 00:25:05.515 "product_name": "NVMe disk", 00:25:05.515 "block_size": 4096, 00:25:05.515 "num_blocks": 1310720, 00:25:05.515 "uuid": "efaf120e-d1fa-49e4-b791-1bee8da58eb4", 00:25:05.515 "numa_id": -1, 00:25:05.515 "assigned_rate_limits": { 00:25:05.515 "rw_ios_per_sec": 0, 00:25:05.515 "rw_mbytes_per_sec": 0, 00:25:05.515 "r_mbytes_per_sec": 0, 00:25:05.515 "w_mbytes_per_sec": 0 00:25:05.515 }, 00:25:05.515 "claimed": true, 00:25:05.515 "claim_type": "read_many_write_one", 00:25:05.515 "zoned": false, 00:25:05.515 "supported_io_types": { 00:25:05.515 "read": true, 00:25:05.515 "write": true, 00:25:05.515 "unmap": true, 00:25:05.515 "flush": true, 00:25:05.515 "reset": true, 00:25:05.515 "nvme_admin": true, 00:25:05.515 "nvme_io": true, 00:25:05.515 "nvme_io_md": false, 00:25:05.515 "write_zeroes": true, 00:25:05.515 "zcopy": false, 00:25:05.515 "get_zone_info": false, 00:25:05.515 "zone_management": false, 00:25:05.515 "zone_append": false, 00:25:05.515 "compare": true, 00:25:05.515 "compare_and_write": false, 00:25:05.515 "abort": true, 00:25:05.515 "seek_hole": false, 00:25:05.515 "seek_data": false, 00:25:05.515 "copy": true, 00:25:05.515 "nvme_iov_md": false 00:25:05.515 }, 00:25:05.515 "driver_specific": { 00:25:05.515 
"nvme": [ 00:25:05.515 { 00:25:05.515 "pci_address": "0000:00:11.0", 00:25:05.515 "trid": { 00:25:05.515 "trtype": "PCIe", 00:25:05.515 "traddr": "0000:00:11.0" 00:25:05.515 }, 00:25:05.515 "ctrlr_data": { 00:25:05.515 "cntlid": 0, 00:25:05.515 "vendor_id": "0x1b36", 00:25:05.515 "model_number": "QEMU NVMe Ctrl", 00:25:05.515 "serial_number": "12341", 00:25:05.515 "firmware_revision": "8.0.0", 00:25:05.515 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:05.515 "oacs": { 00:25:05.515 "security": 0, 00:25:05.515 "format": 1, 00:25:05.515 "firmware": 0, 00:25:05.515 "ns_manage": 1 00:25:05.515 }, 00:25:05.515 "multi_ctrlr": false, 00:25:05.515 "ana_reporting": false 00:25:05.515 }, 00:25:05.515 "vs": { 00:25:05.515 "nvme_version": "1.4" 00:25:05.515 }, 00:25:05.515 "ns_data": { 00:25:05.515 "id": 1, 00:25:05.516 "can_share": false 00:25:05.516 } 00:25:05.516 } 00:25:05.516 ], 00:25:05.516 "mp_policy": "active_passive" 00:25:05.516 } 00:25:05.516 } 00:25:05.516 ]' 00:25:05.516 11:39:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:05.516 11:39:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:25:05.516 11:39:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:05.516 11:39:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:25:05.516 11:39:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:25:05.516 11:39:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:25:05.516 11:39:11 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:25:05.516 11:39:11 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:05.516 11:39:11 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:25:05.516 11:39:11 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:05.516 11:39:11 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:05.775 11:39:11 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=abac590a-4cc8-4160-901c-e220f518cf7e 00:25:05.775 11:39:11 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:25:05.775 11:39:11 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u abac590a-4cc8-4160-901c-e220f518cf7e 00:25:06.033 11:39:11 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:25:06.291 11:39:11 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=4197aa6f-774a-4ba6-9436-79ebaf9349ec 00:25:06.291 11:39:11 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 4197aa6f-774a-4ba6-9436-79ebaf9349ec 00:25:06.550 11:39:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=b8848ee1-b6b9-4994-a7e2-8cbd57005a59 00:25:06.550 11:39:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b8848ee1-b6b9-4994-a7e2-8cbd57005a59 00:25:06.550 11:39:12 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:25:06.550 11:39:12 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:25:06.550 11:39:12 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=b8848ee1-b6b9-4994-a7e2-8cbd57005a59 00:25:06.550 11:39:12 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:25:06.550 11:39:12 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size b8848ee1-b6b9-4994-a7e2-8cbd57005a59 00:25:06.550 11:39:12 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=b8848ee1-b6b9-4994-a7e2-8cbd57005a59 00:25:06.550 11:39:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:06.550 11:39:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:25:06.550 11:39:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:25:06.550 11:39:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b8848ee1-b6b9-4994-a7e2-8cbd57005a59 00:25:06.808 11:39:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:06.808 { 00:25:06.808 "name": "b8848ee1-b6b9-4994-a7e2-8cbd57005a59", 00:25:06.808 "aliases": [ 00:25:06.808 "lvs/nvme0n1p0" 00:25:06.808 ], 00:25:06.808 "product_name": "Logical Volume", 00:25:06.808 "block_size": 4096, 00:25:06.808 "num_blocks": 26476544, 00:25:06.808 "uuid": "b8848ee1-b6b9-4994-a7e2-8cbd57005a59", 00:25:06.808 "assigned_rate_limits": { 00:25:06.808 "rw_ios_per_sec": 0, 00:25:06.808 "rw_mbytes_per_sec": 0, 00:25:06.808 "r_mbytes_per_sec": 0, 00:25:06.808 "w_mbytes_per_sec": 0 00:25:06.808 }, 00:25:06.808 "claimed": false, 00:25:06.808 "zoned": false, 00:25:06.808 "supported_io_types": { 00:25:06.808 "read": true, 00:25:06.808 "write": true, 00:25:06.808 "unmap": true, 00:25:06.808 "flush": false, 00:25:06.808 "reset": true, 00:25:06.808 "nvme_admin": false, 00:25:06.808 "nvme_io": false, 00:25:06.808 "nvme_io_md": false, 00:25:06.808 "write_zeroes": true, 00:25:06.808 "zcopy": false, 00:25:06.808 "get_zone_info": false, 00:25:06.808 "zone_management": false, 00:25:06.808 "zone_append": false, 00:25:06.808 "compare": false, 00:25:06.808 "compare_and_write": false, 00:25:06.808 "abort": false, 00:25:06.808 "seek_hole": true, 00:25:06.808 "seek_data": true, 00:25:06.808 "copy": false, 00:25:06.808 "nvme_iov_md": false 00:25:06.808 }, 00:25:06.808 "driver_specific": { 00:25:06.808 "lvol": { 00:25:06.808 "lvol_store_uuid": "4197aa6f-774a-4ba6-9436-79ebaf9349ec", 00:25:06.808 "base_bdev": "nvme0n1", 00:25:06.808 "thin_provision": true, 00:25:06.808 "num_allocated_clusters": 0, 00:25:06.808 "snapshot": false, 00:25:06.808 "clone": false, 00:25:06.808 "esnap_clone": false 00:25:06.808 } 00:25:06.808 } 00:25:06.808 } 00:25:06.808 ]' 00:25:06.808 11:39:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:06.809 11:39:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:25:06.809 11:39:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:06.809 11:39:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:06.809 11:39:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:06.809 11:39:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:25:06.809 11:39:12 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:25:06.809 11:39:12 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:25:06.809 11:39:12 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:07.066 11:39:12 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:07.066 11:39:12 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:25:07.066 11:39:12 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size b8848ee1-b6b9-4994-a7e2-8cbd57005a59 00:25:07.066 11:39:12 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=b8848ee1-b6b9-4994-a7e2-8cbd57005a59 00:25:07.066 11:39:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:07.066 11:39:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:25:07.066 11:39:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:25:07.066 11:39:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b8848ee1-b6b9-4994-a7e2-8cbd57005a59 00:25:07.634 11:39:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:07.634 { 00:25:07.634 "name": "b8848ee1-b6b9-4994-a7e2-8cbd57005a59", 00:25:07.634 "aliases": [ 00:25:07.634 "lvs/nvme0n1p0" 00:25:07.634 ], 00:25:07.634 "product_name": "Logical Volume", 00:25:07.634 "block_size": 4096, 00:25:07.634 "num_blocks": 26476544, 00:25:07.634 "uuid": "b8848ee1-b6b9-4994-a7e2-8cbd57005a59", 00:25:07.634 "assigned_rate_limits": { 00:25:07.634 "rw_ios_per_sec": 0, 00:25:07.634 "rw_mbytes_per_sec": 0, 00:25:07.634 "r_mbytes_per_sec": 0, 00:25:07.634 "w_mbytes_per_sec": 0 00:25:07.634 }, 00:25:07.634 "claimed": false, 00:25:07.634 "zoned": false, 00:25:07.634 "supported_io_types": { 00:25:07.634 "read": true, 00:25:07.634 "write": true, 00:25:07.634 "unmap": true, 00:25:07.634 "flush": false, 00:25:07.634 "reset": true, 00:25:07.634 "nvme_admin": false, 00:25:07.634 "nvme_io": false, 00:25:07.634 "nvme_io_md": false, 00:25:07.634 "write_zeroes": true, 00:25:07.634 "zcopy": false, 00:25:07.634 "get_zone_info": false, 00:25:07.634 "zone_management": false, 00:25:07.634 "zone_append": false, 00:25:07.634 "compare": false, 00:25:07.634 "compare_and_write": false, 00:25:07.634 "abort": false, 00:25:07.634 "seek_hole": true, 00:25:07.634 "seek_data": true, 00:25:07.634 "copy": false, 00:25:07.634 "nvme_iov_md": false 00:25:07.634 }, 00:25:07.634 "driver_specific": { 00:25:07.634 "lvol": { 00:25:07.634 "lvol_store_uuid": "4197aa6f-774a-4ba6-9436-79ebaf9349ec", 00:25:07.634 "base_bdev": "nvme0n1", 00:25:07.634 "thin_provision": true, 00:25:07.634 "num_allocated_clusters": 0, 00:25:07.634 "snapshot": false, 00:25:07.634 "clone": false, 00:25:07.634 "esnap_clone": false 00:25:07.634 } 00:25:07.634 } 00:25:07.634 } 00:25:07.634 ]' 00:25:07.634 11:39:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:07.634 11:39:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:25:07.634 11:39:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:07.634 11:39:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:07.634 11:39:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:07.634 11:39:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:25:07.634 11:39:13 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:25:07.634 11:39:13 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:07.634 11:39:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:25:07.634 11:39:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size b8848ee1-b6b9-4994-a7e2-8cbd57005a59 00:25:07.634 11:39:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=b8848ee1-b6b9-4994-a7e2-8cbd57005a59 00:25:07.634 11:39:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:07.634 11:39:13 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:25:07.634 11:39:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:25:07.634 11:39:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b8848ee1-b6b9-4994-a7e2-8cbd57005a59 00:25:08.202 11:39:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:08.202 { 00:25:08.202 "name": "b8848ee1-b6b9-4994-a7e2-8cbd57005a59", 00:25:08.202 "aliases": [ 00:25:08.202 "lvs/nvme0n1p0" 00:25:08.202 ], 00:25:08.202 "product_name": "Logical Volume", 00:25:08.202 "block_size": 4096, 00:25:08.202 "num_blocks": 26476544, 00:25:08.202 "uuid": "b8848ee1-b6b9-4994-a7e2-8cbd57005a59", 00:25:08.202 "assigned_rate_limits": { 00:25:08.202 "rw_ios_per_sec": 0, 00:25:08.202 "rw_mbytes_per_sec": 0, 00:25:08.202 "r_mbytes_per_sec": 0, 00:25:08.202 "w_mbytes_per_sec": 0 00:25:08.202 }, 00:25:08.202 "claimed": false, 00:25:08.202 "zoned": false, 00:25:08.202 "supported_io_types": { 00:25:08.202 "read": true, 00:25:08.202 "write": true, 00:25:08.202 "unmap": true, 00:25:08.202 "flush": false, 00:25:08.202 "reset": true, 00:25:08.202 "nvme_admin": false, 00:25:08.202 "nvme_io": false, 00:25:08.202 "nvme_io_md": false, 00:25:08.202 "write_zeroes": true, 00:25:08.202 "zcopy": false, 00:25:08.202 "get_zone_info": false, 00:25:08.202 "zone_management": false, 00:25:08.202 "zone_append": false, 00:25:08.202 "compare": false, 00:25:08.202 "compare_and_write": false, 00:25:08.202 "abort": false, 00:25:08.202 "seek_hole": true, 00:25:08.202 "seek_data": true, 00:25:08.202 "copy": false, 00:25:08.202 "nvme_iov_md": false 00:25:08.202 }, 00:25:08.202 "driver_specific": { 00:25:08.202 "lvol": { 00:25:08.202 "lvol_store_uuid": "4197aa6f-774a-4ba6-9436-79ebaf9349ec", 00:25:08.202 "base_bdev": "nvme0n1", 00:25:08.202 "thin_provision": true, 00:25:08.202 "num_allocated_clusters": 0, 00:25:08.202 "snapshot": false, 00:25:08.202 "clone": false, 00:25:08.202 "esnap_clone": false 00:25:08.202 } 00:25:08.202 } 00:25:08.202 } 00:25:08.202 ]' 00:25:08.202 11:39:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:08.202 11:39:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:25:08.202 11:39:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:08.202 11:39:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:08.202 11:39:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:08.202 11:39:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:25:08.202 11:39:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:25:08.202 11:39:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b8848ee1-b6b9-4994-a7e2-8cbd57005a59 -c nvc0n1p0 --l2p_dram_limit 20 00:25:08.462 [2024-11-20 11:39:13.977027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.462 [2024-11-20 11:39:13.977242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:08.462 [2024-11-20 11:39:13.977270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:08.462 [2024-11-20 11:39:13.977285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.462 [2024-11-20 11:39:13.977365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.462 [2024-11-20 11:39:13.977385] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:08.462 [2024-11-20 11:39:13.977398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:25:08.462 [2024-11-20 11:39:13.977411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.462 [2024-11-20 11:39:13.977433] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:08.462 [2024-11-20 11:39:13.978500] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:08.462 [2024-11-20 11:39:13.978522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.462 [2024-11-20 11:39:13.978536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:08.462 [2024-11-20 11:39:13.978547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.095 ms 00:25:08.462 [2024-11-20 11:39:13.978560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.462 [2024-11-20 11:39:13.978637] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 4cda9631-1326-4c09-b3a2-bffbe52923d4 00:25:08.462 [2024-11-20 11:39:13.980024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.462 [2024-11-20 11:39:13.980061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:25:08.462 [2024-11-20 11:39:13.980077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:25:08.462 [2024-11-20 11:39:13.980091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.462 [2024-11-20 11:39:13.987635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.462 [2024-11-20 11:39:13.987768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:08.462 [2024-11-20 11:39:13.987916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.492 ms 00:25:08.462 [2024-11-20 11:39:13.987956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.462 [2024-11-20 11:39:13.988136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.462 [2024-11-20 11:39:13.988178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:08.462 [2024-11-20 11:39:13.988234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:25:08.462 [2024-11-20 11:39:13.988384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.462 [2024-11-20 11:39:13.988449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.462 [2024-11-20 11:39:13.988462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:08.462 [2024-11-20 11:39:13.988494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:08.462 [2024-11-20 11:39:13.988505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.462 [2024-11-20 11:39:13.988534] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:08.462 [2024-11-20 11:39:13.994283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.462 [2024-11-20 11:39:13.994318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:08.462 [2024-11-20 11:39:13.994331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.761 ms 00:25:08.462 [2024-11-20 11:39:13.994345] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.462 [2024-11-20 11:39:13.994379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.462 [2024-11-20 11:39:13.994393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:08.462 [2024-11-20 11:39:13.994403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:08.462 [2024-11-20 11:39:13.994416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.462 [2024-11-20 11:39:13.994467] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:08.462 [2024-11-20 11:39:13.994627] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:08.462 [2024-11-20 11:39:13.994641] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:08.462 [2024-11-20 11:39:13.994671] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:08.462 [2024-11-20 11:39:13.994686] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:08.462 [2024-11-20 11:39:13.994701] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:08.462 [2024-11-20 11:39:13.994713] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:08.463 [2024-11-20 11:39:13.994725] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:08.463 [2024-11-20 11:39:13.994735] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:08.463 [2024-11-20 11:39:13.994748] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:08.463 [2024-11-20 11:39:13.994758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.463 [2024-11-20 11:39:13.994774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:08.463 [2024-11-20 11:39:13.994785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:25:08.463 [2024-11-20 11:39:13.994798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.463 [2024-11-20 11:39:13.994869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.463 [2024-11-20 11:39:13.994885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:08.463 [2024-11-20 11:39:13.994895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:25:08.463 [2024-11-20 11:39:13.994910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.463 [2024-11-20 11:39:13.994989] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:08.463 [2024-11-20 11:39:13.995004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:08.463 [2024-11-20 11:39:13.995017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:08.463 [2024-11-20 11:39:13.995030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:08.463 [2024-11-20 11:39:13.995040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:08.463 [2024-11-20 11:39:13.995052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:08.463 [2024-11-20 11:39:13.995061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:08.463 
[2024-11-20 11:39:13.995073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:08.463 [2024-11-20 11:39:13.995083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:08.463 [2024-11-20 11:39:13.995095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:08.463 [2024-11-20 11:39:13.995104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:08.463 [2024-11-20 11:39:13.995116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:08.463 [2024-11-20 11:39:13.995125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:08.463 [2024-11-20 11:39:13.995147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:08.463 [2024-11-20 11:39:13.995157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:08.463 [2024-11-20 11:39:13.995172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:08.463 [2024-11-20 11:39:13.995182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:08.463 [2024-11-20 11:39:13.995194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:08.463 [2024-11-20 11:39:13.995203] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:08.463 [2024-11-20 11:39:13.995217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:08.463 [2024-11-20 11:39:13.995226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:08.463 [2024-11-20 11:39:13.995238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:08.463 [2024-11-20 11:39:13.995248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:08.463 [2024-11-20 11:39:13.995260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:08.463 [2024-11-20 11:39:13.995269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:08.463 [2024-11-20 11:39:13.995281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:08.463 [2024-11-20 11:39:13.995290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:08.463 [2024-11-20 11:39:13.995302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:08.463 [2024-11-20 11:39:13.995311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:08.463 [2024-11-20 11:39:13.995323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:08.463 [2024-11-20 11:39:13.995332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:08.463 [2024-11-20 11:39:13.995346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:08.463 [2024-11-20 11:39:13.995355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:08.463 [2024-11-20 11:39:13.995366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:08.463 [2024-11-20 11:39:13.995376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:08.463 [2024-11-20 11:39:13.995388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:08.463 [2024-11-20 11:39:13.995397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:08.463 [2024-11-20 11:39:13.995408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:08.463 [2024-11-20 11:39:13.995417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:25:08.463 [2024-11-20 11:39:13.995428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:08.463 [2024-11-20 11:39:13.995438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:08.463 [2024-11-20 11:39:13.995449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:08.463 [2024-11-20 11:39:13.995458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:08.463 [2024-11-20 11:39:13.995480] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:08.463 [2024-11-20 11:39:13.995491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:08.463 [2024-11-20 11:39:13.995504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:08.463 [2024-11-20 11:39:13.995514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:08.463 [2024-11-20 11:39:13.995530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:08.463 [2024-11-20 11:39:13.995541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:08.463 [2024-11-20 11:39:13.995553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:08.463 [2024-11-20 11:39:13.995563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:08.463 [2024-11-20 11:39:13.995574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:08.463 [2024-11-20 11:39:13.995584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:08.463 [2024-11-20 11:39:13.995599] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:08.463 [2024-11-20 11:39:13.995612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:08.463 [2024-11-20 11:39:13.995626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:08.463 [2024-11-20 11:39:13.995636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:08.463 [2024-11-20 11:39:13.995649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:08.463 [2024-11-20 11:39:13.995659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:08.463 [2024-11-20 11:39:13.995672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:08.463 [2024-11-20 11:39:13.995682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:08.463 [2024-11-20 11:39:13.995695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:08.463 [2024-11-20 11:39:13.995705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:08.463 [2024-11-20 11:39:13.995720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:08.463 [2024-11-20 11:39:13.995731] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:08.463 [2024-11-20 11:39:13.995743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:08.463 [2024-11-20 11:39:13.995753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:08.463 [2024-11-20 11:39:13.995766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:08.463 [2024-11-20 11:39:13.995777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:08.463 [2024-11-20 11:39:13.995790] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:08.463 [2024-11-20 11:39:13.995801] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:08.463 [2024-11-20 11:39:13.995816] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:08.463 [2024-11-20 11:39:13.995827] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:08.463 [2024-11-20 11:39:13.995839] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:08.463 [2024-11-20 11:39:13.995850] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:08.463 [2024-11-20 11:39:13.995863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.463 [2024-11-20 11:39:13.995875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:08.463 [2024-11-20 11:39:13.995888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.924 ms 00:25:08.463 [2024-11-20 11:39:13.995898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.463 [2024-11-20 11:39:13.995938] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
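The ftl0 device being brought up here was assembled by the RPC sequence traced above: the base NVMe at 0000:00:11.0 is attached as nvme0 and carved into a thin-provisioned 103424 MiB logical volume, the cache NVMe at 0000:00:10.0 is attached as nvc0 and split to yield a 5171 MiB write-buffer partition, and bdev_ftl_create ties the two together with a 20 MiB L2P DRAM limit (hence "l2p maximum resident size is: 19 (of 20) MiB" later in this startup log). A condensed recipe of those logged calls, with the generated UUIDs elided:
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc_py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base device
  $rpc_py bdev_lvol_create_lvstore nvme0n1 lvs
  $rpc_py bdev_lvol_create nvme0n1p0 103424 -t -u <lvstore uuid>         # thin-provisioned data volume
  $rpc_py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0   # cache device
  $rpc_py bdev_split_create nvc0n1 -s 5171 1                             # 5171 MiB NV cache partition
  $rpc_py -t 240 bdev_ftl_create -b ftl0 -d <lvol uuid> -c nvc0n1p0 --l2p_dram_limit 20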
00:25:08.463 [2024-11-20 11:39:13.995952] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:11.046 [2024-11-20 11:39:16.413875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.046 [2024-11-20 11:39:16.413937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:11.046 [2024-11-20 11:39:16.413963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2417.919 ms 00:25:11.046 [2024-11-20 11:39:16.413975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.046 [2024-11-20 11:39:16.453177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.046 [2024-11-20 11:39:16.453247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:11.046 [2024-11-20 11:39:16.453268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.831 ms 00:25:11.046 [2024-11-20 11:39:16.453280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.046 [2024-11-20 11:39:16.453449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.046 [2024-11-20 11:39:16.453463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:11.046 [2024-11-20 11:39:16.453492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:25:11.046 [2024-11-20 11:39:16.453504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.046 [2024-11-20 11:39:16.512586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.046 [2024-11-20 11:39:16.512800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:11.046 [2024-11-20 11:39:16.512829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.031 ms 00:25:11.046 [2024-11-20 11:39:16.512841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.046 [2024-11-20 11:39:16.512893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.046 [2024-11-20 11:39:16.512908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:11.046 [2024-11-20 11:39:16.512922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:11.046 [2024-11-20 11:39:16.512932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.046 [2024-11-20 11:39:16.513491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.046 [2024-11-20 11:39:16.513509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:11.046 [2024-11-20 11:39:16.513523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.430 ms 00:25:11.046 [2024-11-20 11:39:16.513533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.046 [2024-11-20 11:39:16.513650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.046 [2024-11-20 11:39:16.513663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:11.046 [2024-11-20 11:39:16.513679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:25:11.046 [2024-11-20 11:39:16.513690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.046 [2024-11-20 11:39:16.533372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.046 [2024-11-20 11:39:16.533414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:11.046 [2024-11-20 
11:39:16.533432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.654 ms 00:25:11.046 [2024-11-20 11:39:16.533443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.046 [2024-11-20 11:39:16.546419] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:25:11.046 [2024-11-20 11:39:16.552464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.046 [2024-11-20 11:39:16.552513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:11.046 [2024-11-20 11:39:16.552528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.894 ms 00:25:11.046 [2024-11-20 11:39:16.552542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.046 [2024-11-20 11:39:16.625442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.046 [2024-11-20 11:39:16.625527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:11.046 [2024-11-20 11:39:16.625545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.857 ms 00:25:11.046 [2024-11-20 11:39:16.625559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.046 [2024-11-20 11:39:16.625768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.046 [2024-11-20 11:39:16.625790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:11.046 [2024-11-20 11:39:16.625803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:25:11.046 [2024-11-20 11:39:16.625817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.046 [2024-11-20 11:39:16.664271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.046 [2024-11-20 11:39:16.664333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:25:11.046 [2024-11-20 11:39:16.664351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.371 ms 00:25:11.046 [2024-11-20 11:39:16.664365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.046 [2024-11-20 11:39:16.703081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.046 [2024-11-20 11:39:16.703160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:11.046 [2024-11-20 11:39:16.703195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.670 ms 00:25:11.046 [2024-11-20 11:39:16.703208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.046 [2024-11-20 11:39:16.703923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.046 [2024-11-20 11:39:16.703950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:11.046 [2024-11-20 11:39:16.703963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.669 ms 00:25:11.046 [2024-11-20 11:39:16.703977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.306 [2024-11-20 11:39:16.804668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.307 [2024-11-20 11:39:16.804752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:25:11.307 [2024-11-20 11:39:16.804770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 100.627 ms 00:25:11.307 [2024-11-20 11:39:16.804784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.307 [2024-11-20 
11:39:16.846412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.307 [2024-11-20 11:39:16.846655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:25:11.307 [2024-11-20 11:39:16.846680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.541 ms 00:25:11.307 [2024-11-20 11:39:16.846698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.307 [2024-11-20 11:39:16.884769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.307 [2024-11-20 11:39:16.884933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:25:11.307 [2024-11-20 11:39:16.884956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.025 ms 00:25:11.307 [2024-11-20 11:39:16.884970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.307 [2024-11-20 11:39:16.923813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.307 [2024-11-20 11:39:16.923995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:11.307 [2024-11-20 11:39:16.924019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.755 ms 00:25:11.307 [2024-11-20 11:39:16.924032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.307 [2024-11-20 11:39:16.924076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.307 [2024-11-20 11:39:16.924095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:11.307 [2024-11-20 11:39:16.924107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:11.307 [2024-11-20 11:39:16.924120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.307 [2024-11-20 11:39:16.924226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.307 [2024-11-20 11:39:16.924242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:11.307 [2024-11-20 11:39:16.924253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:25:11.307 [2024-11-20 11:39:16.924266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.307 [2024-11-20 11:39:16.925382] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2947.851 ms, result 0 00:25:11.307 { 00:25:11.307 "name": "ftl0", 00:25:11.307 "uuid": "4cda9631-1326-4c09-b3a2-bffbe52923d4" 00:25:11.307 } 00:25:11.307 11:39:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:25:11.307 11:39:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:25:11.307 11:39:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:25:11.566 11:39:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:25:11.825 [2024-11-20 11:39:17.366023] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:25:11.825 I/O size of 69632 is greater than zero copy threshold (65536). 00:25:11.825 Zero copy mechanism will not be used. 00:25:11.825 Running I/O for 4 seconds... 
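The ftl0 readiness probe at bdevperf.sh@28 above chains rpc.py, jq, and grep. The same probe can be replayed by hand against a running target; a minimal sketch, assuming the rpc.py path shown in the trace and an FTL bdev already created under the name ftl0:
    # Ask the target for FTL stats, extract the bdev name, and verify it round-trips.
    # grep -qw exits non-zero if "ftl0" is missing, so the && branch doubles as a health check.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" bdev_ftl_get_stats -b ftl0 | jq -r .name | grep -qw ftl0 && echo 'ftl0 is up'
Note that the 69632-byte I/O size in this first perform_tests pass deliberately exceeds bdevperf's 65536-byte zero-copy threshold, which is why the log reports that the zero-copy mechanism will not be used for this run.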
00:25:13.699 2022.00 IOPS, 134.27 MiB/s [2024-11-20T11:39:20.398Z] 2058.50 IOPS, 136.70 MiB/s [2024-11-20T11:39:21.775Z] 2083.33 IOPS, 138.35 MiB/s [2024-11-20T11:39:21.775Z] 2059.50 IOPS, 136.76 MiB/s 00:25:16.013 Latency(us) 00:25:16.013 [2024-11-20T11:39:21.775Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:16.013 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:25:16.013 ftl0 : 4.00 2058.70 136.71 0.00 0.00 510.70 206.75 2215.74 00:25:16.013 [2024-11-20T11:39:21.775Z] =================================================================================================================== 00:25:16.013 [2024-11-20T11:39:21.775Z] Total : 2058.70 136.71 0.00 0.00 510.70 206.75 2215.74 00:25:16.013 [2024-11-20 11:39:21.377943] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:25:16.013 { 00:25:16.013 "results": [ 00:25:16.013 { 00:25:16.013 "job": "ftl0", 00:25:16.013 "core_mask": "0x1", 00:25:16.013 "workload": "randwrite", 00:25:16.013 "status": "finished", 00:25:16.013 "queue_depth": 1, 00:25:16.013 "io_size": 69632, 00:25:16.013 "runtime": 4.002033, 00:25:16.013 "iops": 2058.7036638628415, 00:25:16.013 "mibps": 136.71079017839182, 00:25:16.013 "io_failed": 0, 00:25:16.013 "io_timeout": 0, 00:25:16.013 "avg_latency_us": 510.69711788878675, 00:25:16.013 "min_latency_us": 206.75047619047618, 00:25:16.013 "max_latency_us": 2215.7409523809524 00:25:16.013 } 00:25:16.013 ], 00:25:16.013 "core_count": 1 00:25:16.014 } 00:25:16.014 11:39:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:25:16.014 [2024-11-20 11:39:21.503694] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:25:16.014 Running I/O for 4 seconds... 
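The headline numbers from the 69632-byte pass above are internally consistent: MiB/s is just IOPS multiplied by the I/O size. A quick sanity check, assuming only a stock python3 on the host (not part of the test itself):
    # 2058.70 IOPS x 69632 bytes per I/O, converted to MiB/s (2^20 bytes per MiB)
    python3 -c 'print(2058.70 * 69632 / 2**20)'   # ~136.71, matching the reported 136.71 MiB/s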
00:25:17.886 9784.00 IOPS, 38.22 MiB/s [2024-11-20T11:39:24.582Z] 9397.00 IOPS, 36.71 MiB/s [2024-11-20T11:39:25.517Z] 9278.33 IOPS, 36.24 MiB/s [2024-11-20T11:39:25.776Z] 9210.25 IOPS, 35.98 MiB/s 00:25:20.014 Latency(us) 00:25:20.014 [2024-11-20T11:39:25.776Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:20.014 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:25:20.014 ftl0 : 4.02 9193.45 35.91 0.00 0.00 13887.65 269.17 32455.92 00:25:20.014 [2024-11-20T11:39:25.776Z] =================================================================================================================== 00:25:20.014 [2024-11-20T11:39:25.776Z] Total : 9193.45 35.91 0.00 0.00 13887.65 0.00 32455.92 00:25:20.014 [2024-11-20 11:39:25.537032] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:25:20.014 { 00:25:20.014 "results": [ 00:25:20.014 { 00:25:20.014 "job": "ftl0", 00:25:20.014 "core_mask": "0x1", 00:25:20.014 "workload": "randwrite", 00:25:20.014 "status": "finished", 00:25:20.014 "queue_depth": 128, 00:25:20.014 "io_size": 4096, 00:25:20.014 "runtime": 4.021017, 00:25:20.014 "iops": 9193.445339823234, 00:25:20.014 "mibps": 35.91189585868451, 00:25:20.014 "io_failed": 0, 00:25:20.014 "io_timeout": 0, 00:25:20.014 "avg_latency_us": 13887.648749270584, 00:25:20.014 "min_latency_us": 269.1657142857143, 00:25:20.014 "max_latency_us": 32455.92380952381 00:25:20.014 } 00:25:20.014 ], 00:25:20.014 "core_count": 1 00:25:20.014 } 00:25:20.014 11:39:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:25:20.014 [2024-11-20 11:39:25.697237] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:25:20.014 Running I/O for 4 seconds... 
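Each perform_tests pass above also emits its result as a JSON object alongside the human-readable table, and the JSON form is the easier one to post-process. A hedged sketch, assuming the JSON block has been captured to results.json (a hypothetical filename; the test itself only writes to stdout):
    # Pull the headline metrics out of the first (and here only) result object.
    jq '.results[0] | {iops, mibps, avg_latency_us, max_latency_us}' results.json
    # "core_count" sits at the top level, next to "results".
    jq .core_count results.json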
00:25:22.328 6580.00 IOPS, 25.70 MiB/s [2024-11-20T11:39:29.025Z] 7077.00 IOPS, 27.64 MiB/s [2024-11-20T11:39:29.982Z] 7260.33 IOPS, 28.36 MiB/s [2024-11-20T11:39:29.982Z] 7409.75 IOPS, 28.94 MiB/s 00:25:24.220 Latency(us) 00:25:24.220 [2024-11-20T11:39:29.982Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:24.220 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:24.220 Verification LBA range: start 0x0 length 0x1400000 00:25:24.220 ftl0 : 4.01 7421.51 28.99 0.00 0.00 17190.18 278.92 31706.94 00:25:24.220 [2024-11-20T11:39:29.982Z] =================================================================================================================== 00:25:24.220 [2024-11-20T11:39:29.982Z] Total : 7421.51 28.99 0.00 0.00 17190.18 0.00 31706.94 00:25:24.220 [2024-11-20 11:39:29.731761] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:25:24.220 { 00:25:24.220 "results": [ 00:25:24.220 { 00:25:24.220 "job": "ftl0", 00:25:24.220 "core_mask": "0x1", 00:25:24.220 "workload": "verify", 00:25:24.220 "status": "finished", 00:25:24.220 "verify_range": { 00:25:24.220 "start": 0, 00:25:24.220 "length": 20971520 00:25:24.220 }, 00:25:24.220 "queue_depth": 128, 00:25:24.220 "io_size": 4096, 00:25:24.220 "runtime": 4.010642, 00:25:24.220 "iops": 7421.505085719444, 00:25:24.220 "mibps": 28.990254241091577, 00:25:24.220 "io_failed": 0, 00:25:24.220 "io_timeout": 0, 00:25:24.220 "avg_latency_us": 17190.176041723662, 00:25:24.220 "min_latency_us": 278.91809523809525, 00:25:24.220 "max_latency_us": 31706.94095238095 00:25:24.220 } 00:25:24.220 ], 00:25:24.220 "core_count": 1 00:25:24.220 } 00:25:24.220 11:39:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:25:24.487 [2024-11-20 11:39:30.035698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.487 [2024-11-20 11:39:30.035761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:24.487 [2024-11-20 11:39:30.035783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:24.487 [2024-11-20 11:39:30.035796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.487 [2024-11-20 11:39:30.035822] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:24.487 [2024-11-20 11:39:30.040318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.487 [2024-11-20 11:39:30.040350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:24.487 [2024-11-20 11:39:30.040367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.472 ms 00:25:24.487 [2024-11-20 11:39:30.040378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.487 [2024-11-20 11:39:30.042069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.487 [2024-11-20 11:39:30.042113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:24.487 [2024-11-20 11:39:30.042131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.659 ms 00:25:24.487 [2024-11-20 11:39:30.042142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.487 [2024-11-20 11:39:30.215843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.487 [2024-11-20 11:39:30.215920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist L2P 00:25:24.487 [2024-11-20 11:39:30.215949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 173.654 ms 00:25:24.487 [2024-11-20 11:39:30.215962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.487 [2024-11-20 11:39:30.221536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.487 [2024-11-20 11:39:30.221738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:24.487 [2024-11-20 11:39:30.221769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.519 ms 00:25:24.487 [2024-11-20 11:39:30.221781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.747 [2024-11-20 11:39:30.261185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.747 [2024-11-20 11:39:30.261409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:24.747 [2024-11-20 11:39:30.261442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.309 ms 00:25:24.747 [2024-11-20 11:39:30.261454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.747 [2024-11-20 11:39:30.285107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.747 [2024-11-20 11:39:30.285172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:24.747 [2024-11-20 11:39:30.285197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.556 ms 00:25:24.747 [2024-11-20 11:39:30.285209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.747 [2024-11-20 11:39:30.285375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.747 [2024-11-20 11:39:30.285390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:24.747 [2024-11-20 11:39:30.285407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:25:24.747 [2024-11-20 11:39:30.285418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.747 [2024-11-20 11:39:30.324050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.747 [2024-11-20 11:39:30.324124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:24.747 [2024-11-20 11:39:30.324145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.604 ms 00:25:24.747 [2024-11-20 11:39:30.324156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.747 [2024-11-20 11:39:30.362300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.747 [2024-11-20 11:39:30.362359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:24.747 [2024-11-20 11:39:30.362378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.081 ms 00:25:24.747 [2024-11-20 11:39:30.362389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.747 [2024-11-20 11:39:30.401341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.747 [2024-11-20 11:39:30.401404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:24.747 [2024-11-20 11:39:30.401425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.893 ms 00:25:24.747 [2024-11-20 11:39:30.401437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.747 [2024-11-20 11:39:30.438949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.747 [2024-11-20 
11:39:30.439013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:24.747 [2024-11-20 11:39:30.439037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.349 ms 00:25:24.747 [2024-11-20 11:39:30.439048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.747 [2024-11-20 11:39:30.439104] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:24.747 [2024-11-20 11:39:30.439123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 
wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.439996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.440009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.440020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.440033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.440044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.440059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.440070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.440083] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.440110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.440124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.440148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.440161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.440172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.440187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.440198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.440211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.440222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.440235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.440246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.440260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.440270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.440283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.440310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.440325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:24.748 [2024-11-20 11:39:30.440337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:24.749 [2024-11-20 11:39:30.440351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:24.749 [2024-11-20 11:39:30.440363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:24.749 [2024-11-20 11:39:30.440377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:24.749 [2024-11-20 11:39:30.440389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:24.749 [2024-11-20 11:39:30.440406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:24.749 [2024-11-20 11:39:30.440419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:24.749 [2024-11-20 11:39:30.440435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:24.749 [2024-11-20 11:39:30.440447] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:24.749 [2024-11-20 11:39:30.440461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:24.749 [2024-11-20 11:39:30.440473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:24.749 [2024-11-20 11:39:30.440488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:24.749 [2024-11-20 11:39:30.440508] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:24.749 [2024-11-20 11:39:30.440533] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4cda9631-1326-4c09-b3a2-bffbe52923d4 00:25:24.749 [2024-11-20 11:39:30.440546] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:24.749 [2024-11-20 11:39:30.440559] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:24.749 [2024-11-20 11:39:30.440574] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:24.749 [2024-11-20 11:39:30.440589] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:24.749 [2024-11-20 11:39:30.440599] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:24.749 [2024-11-20 11:39:30.440614] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:24.749 [2024-11-20 11:39:30.440625] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:24.749 [2024-11-20 11:39:30.440641] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:24.749 [2024-11-20 11:39:30.440651] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:24.749 [2024-11-20 11:39:30.440665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.749 [2024-11-20 11:39:30.440677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:24.749 [2024-11-20 11:39:30.440692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.563 ms 00:25:24.749 [2024-11-20 11:39:30.440704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.749 [2024-11-20 11:39:30.461960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.749 [2024-11-20 11:39:30.462025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:24.749 [2024-11-20 11:39:30.462048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.179 ms 00:25:24.749 [2024-11-20 11:39:30.462059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.749 [2024-11-20 11:39:30.462663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.749 [2024-11-20 11:39:30.462702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:24.749 [2024-11-20 11:39:30.462718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.562 ms 00:25:24.749 [2024-11-20 11:39:30.462729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.009 [2024-11-20 11:39:30.522860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:25.009 [2024-11-20 11:39:30.522941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:25.009 [2024-11-20 11:39:30.522972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:25.009 [2024-11-20 11:39:30.522986] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:25:25.009 [2024-11-20 11:39:30.523084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:25.009 [2024-11-20 11:39:30.523101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:25.009 [2024-11-20 11:39:30.523124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:25.009 [2024-11-20 11:39:30.523142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.009 [2024-11-20 11:39:30.523308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:25.009 [2024-11-20 11:39:30.523330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:25.009 [2024-11-20 11:39:30.523351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:25.009 [2024-11-20 11:39:30.523368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.009 [2024-11-20 11:39:30.523402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:25.009 [2024-11-20 11:39:30.523421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:25.009 [2024-11-20 11:39:30.523441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:25.009 [2024-11-20 11:39:30.523459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.009 [2024-11-20 11:39:30.649470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:25.009 [2024-11-20 11:39:30.649756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:25.009 [2024-11-20 11:39:30.649802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:25.009 [2024-11-20 11:39:30.649819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.009 [2024-11-20 11:39:30.756899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:25.009 [2024-11-20 11:39:30.756969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:25.009 [2024-11-20 11:39:30.756988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:25.009 [2024-11-20 11:39:30.757000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.009 [2024-11-20 11:39:30.757128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:25.009 [2024-11-20 11:39:30.757153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:25.009 [2024-11-20 11:39:30.757171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:25.009 [2024-11-20 11:39:30.757181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.009 [2024-11-20 11:39:30.757245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:25.009 [2024-11-20 11:39:30.757257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:25.009 [2024-11-20 11:39:30.757271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:25.009 [2024-11-20 11:39:30.757281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.009 [2024-11-20 11:39:30.757401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:25.009 [2024-11-20 11:39:30.757415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:25.009 [2024-11-20 11:39:30.757435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:25:25.009 [2024-11-20 11:39:30.757445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.009 [2024-11-20 11:39:30.757513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:25.009 [2024-11-20 11:39:30.757527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:25.009 [2024-11-20 11:39:30.757541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:25.009 [2024-11-20 11:39:30.757551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.009 [2024-11-20 11:39:30.757594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:25.009 [2024-11-20 11:39:30.757605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:25.009 [2024-11-20 11:39:30.757618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:25.009 [2024-11-20 11:39:30.757631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.009 [2024-11-20 11:39:30.757679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:25.009 [2024-11-20 11:39:30.757701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:25.009 [2024-11-20 11:39:30.757714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:25.009 [2024-11-20 11:39:30.757724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.009 [2024-11-20 11:39:30.757852] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 722.109 ms, result 0 00:25:25.009 true 00:25:25.271 11:39:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 78592 00:25:25.271 11:39:30 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 78592 ']' 00:25:25.271 11:39:30 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 78592 00:25:25.271 11:39:30 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:25:25.271 11:39:30 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:25.271 11:39:30 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78592 00:25:25.271 11:39:30 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:25.271 killing process with pid 78592 00:25:25.271 Received shutdown signal, test time was about 4.000000 seconds 00:25:25.271 00:25:25.271 Latency(us) 00:25:25.271 [2024-11-20T11:39:31.033Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:25.271 [2024-11-20T11:39:31.033Z] =================================================================================================================== 00:25:25.271 [2024-11-20T11:39:31.033Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:25.271 11:39:30 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:25.271 11:39:30 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78592' 00:25:25.271 11:39:30 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 78592 00:25:25.271 11:39:30 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 78592 00:25:29.526 Remove shared memory files 00:25:29.526 11:39:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:29.526 11:39:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:25:29.526 11:39:34 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:29.526 11:39:34 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:25:29.526 11:39:34 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:25:29.526 11:39:34 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:25:29.526 11:39:34 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:29.526 11:39:34 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:25:29.526 ************************************ 00:25:29.526 END TEST ftl_bdevperf 00:25:29.526 ************************************ 00:25:29.526 00:25:29.526 real 0m25.396s 00:25:29.526 user 0m28.714s 00:25:29.526 sys 0m1.405s 00:25:29.526 11:39:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:29.526 11:39:34 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:29.526 11:39:34 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:25:29.526 11:39:34 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:25:29.526 11:39:34 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:29.526 11:39:34 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:29.526 ************************************ 00:25:29.526 START TEST ftl_trim 00:25:29.526 ************************************ 00:25:29.526 11:39:34 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:25:29.526 * Looking for test storage... 00:25:29.526 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:29.526 11:39:34 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:29.526 11:39:34 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:25:29.526 11:39:34 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:29.526 11:39:34 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:29.526 11:39:34 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:29.526 11:39:34 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:29.526 11:39:34 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:29.526 11:39:34 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:25:29.526 11:39:34 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:25:29.526 11:39:34 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:25:29.526 11:39:34 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:25:29.526 11:39:34 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:25:29.526 11:39:34 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:25:29.526 11:39:34 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:25:29.526 11:39:34 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:29.526 11:39:34 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:25:29.526 11:39:34 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:25:29.526 11:39:34 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:29.526 11:39:34 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:29.526 11:39:34 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:25:29.526 11:39:34 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:25:29.526 11:39:34 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:29.526 11:39:34 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:25:29.526 11:39:34 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:25:29.526 11:39:34 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:25:29.526 11:39:34 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:25:29.526 11:39:34 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:29.526 11:39:34 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:25:29.526 11:39:34 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:25:29.526 11:39:34 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:29.526 11:39:34 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:29.526 11:39:34 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:25:29.526 11:39:34 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:29.526 11:39:34 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:29.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:29.526 --rc genhtml_branch_coverage=1 00:25:29.526 --rc genhtml_function_coverage=1 00:25:29.526 --rc genhtml_legend=1 00:25:29.526 --rc geninfo_all_blocks=1 00:25:29.526 --rc geninfo_unexecuted_blocks=1 00:25:29.526 00:25:29.526 ' 00:25:29.526 11:39:34 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:29.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:29.526 --rc genhtml_branch_coverage=1 00:25:29.526 --rc genhtml_function_coverage=1 00:25:29.526 --rc genhtml_legend=1 00:25:29.526 --rc geninfo_all_blocks=1 00:25:29.526 --rc geninfo_unexecuted_blocks=1 00:25:29.526 00:25:29.526 ' 00:25:29.526 11:39:34 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:29.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:29.526 --rc genhtml_branch_coverage=1 00:25:29.526 --rc genhtml_function_coverage=1 00:25:29.526 --rc genhtml_legend=1 00:25:29.526 --rc geninfo_all_blocks=1 00:25:29.526 --rc geninfo_unexecuted_blocks=1 00:25:29.526 00:25:29.526 ' 00:25:29.526 11:39:34 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:29.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:29.526 --rc genhtml_branch_coverage=1 00:25:29.526 --rc genhtml_function_coverage=1 00:25:29.526 --rc genhtml_legend=1 00:25:29.526 --rc geninfo_all_blocks=1 00:25:29.526 --rc geninfo_unexecuted_blocks=1 00:25:29.526 00:25:29.526 ' 00:25:29.526 11:39:34 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:29.526 11:39:34 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:25:29.526 11:39:34 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:29.526 11:39:34 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:29.526 11:39:34 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:25:29.526 11:39:34 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:29.526 11:39:34 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:29.526 11:39:34 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:29.526 11:39:34 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:29.526 11:39:34 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:29.526 11:39:34 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:29.526 11:39:34 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:29.526 11:39:34 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:29.526 11:39:34 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:29.526 11:39:34 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:29.526 11:39:34 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:29.526 11:39:34 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:29.526 11:39:34 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:29.526 11:39:34 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:29.526 11:39:34 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:29.526 11:39:34 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:29.526 11:39:34 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:29.526 11:39:34 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:29.526 11:39:34 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:29.526 11:39:34 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:29.526 11:39:34 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:29.526 11:39:34 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:29.526 11:39:34 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:29.526 11:39:34 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:29.526 11:39:34 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:29.526 11:39:34 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:25:29.526 11:39:34 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:25:29.526 11:39:34 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:25:29.526 11:39:34 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:25:29.526 11:39:34 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:25:29.526 11:39:34 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:25:29.526 11:39:34 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:25:29.526 11:39:34 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:25:29.526 11:39:34 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:29.526 11:39:34 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:29.526 11:39:34 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:25:29.526 11:39:34 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=78945 00:25:29.526 11:39:34 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 78945 00:25:29.526 11:39:34 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:25:29.526 11:39:34 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78945 ']' 00:25:29.527 11:39:34 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:29.527 11:39:34 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:29.527 11:39:34 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:29.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:29.527 11:39:34 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:29.527 11:39:34 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:25:29.527 [2024-11-20 11:39:34.933070] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:25:29.527 [2024-11-20 11:39:34.933529] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78945 ] 00:25:29.527 [2024-11-20 11:39:35.134006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:29.784 [2024-11-20 11:39:35.308422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:29.784 [2024-11-20 11:39:35.308526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:29.784 [2024-11-20 11:39:35.308490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.720 11:39:36 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:30.720 11:39:36 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:25:30.720 11:39:36 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:30.720 11:39:36 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:25:30.720 11:39:36 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:30.720 11:39:36 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:25:30.720 11:39:36 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:25:30.720 11:39:36 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:30.979 11:39:36 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:30.979 11:39:36 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:25:30.979 11:39:36 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:30.979 11:39:36 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:25:30.979 11:39:36 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:30.979 11:39:36 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:25:30.979 11:39:36 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:25:30.979 11:39:36 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:31.239 11:39:36 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:31.239 { 00:25:31.239 "name": "nvme0n1", 00:25:31.239 "aliases": [ 
00:25:31.239 "87d6be64-a8d5-47a3-8c9f-a98a45bedb06" 00:25:31.239 ], 00:25:31.239 "product_name": "NVMe disk", 00:25:31.239 "block_size": 4096, 00:25:31.239 "num_blocks": 1310720, 00:25:31.239 "uuid": "87d6be64-a8d5-47a3-8c9f-a98a45bedb06", 00:25:31.239 "numa_id": -1, 00:25:31.239 "assigned_rate_limits": { 00:25:31.239 "rw_ios_per_sec": 0, 00:25:31.239 "rw_mbytes_per_sec": 0, 00:25:31.239 "r_mbytes_per_sec": 0, 00:25:31.239 "w_mbytes_per_sec": 0 00:25:31.239 }, 00:25:31.239 "claimed": true, 00:25:31.239 "claim_type": "read_many_write_one", 00:25:31.239 "zoned": false, 00:25:31.239 "supported_io_types": { 00:25:31.239 "read": true, 00:25:31.239 "write": true, 00:25:31.239 "unmap": true, 00:25:31.239 "flush": true, 00:25:31.239 "reset": true, 00:25:31.239 "nvme_admin": true, 00:25:31.239 "nvme_io": true, 00:25:31.239 "nvme_io_md": false, 00:25:31.239 "write_zeroes": true, 00:25:31.239 "zcopy": false, 00:25:31.239 "get_zone_info": false, 00:25:31.239 "zone_management": false, 00:25:31.239 "zone_append": false, 00:25:31.239 "compare": true, 00:25:31.239 "compare_and_write": false, 00:25:31.239 "abort": true, 00:25:31.239 "seek_hole": false, 00:25:31.239 "seek_data": false, 00:25:31.239 "copy": true, 00:25:31.239 "nvme_iov_md": false 00:25:31.239 }, 00:25:31.239 "driver_specific": { 00:25:31.239 "nvme": [ 00:25:31.239 { 00:25:31.239 "pci_address": "0000:00:11.0", 00:25:31.239 "trid": { 00:25:31.239 "trtype": "PCIe", 00:25:31.239 "traddr": "0000:00:11.0" 00:25:31.239 }, 00:25:31.239 "ctrlr_data": { 00:25:31.239 "cntlid": 0, 00:25:31.239 "vendor_id": "0x1b36", 00:25:31.239 "model_number": "QEMU NVMe Ctrl", 00:25:31.239 "serial_number": "12341", 00:25:31.239 "firmware_revision": "8.0.0", 00:25:31.239 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:31.239 "oacs": { 00:25:31.239 "security": 0, 00:25:31.239 "format": 1, 00:25:31.239 "firmware": 0, 00:25:31.239 "ns_manage": 1 00:25:31.239 }, 00:25:31.239 "multi_ctrlr": false, 00:25:31.239 "ana_reporting": false 00:25:31.239 }, 00:25:31.239 "vs": { 00:25:31.239 "nvme_version": "1.4" 00:25:31.239 }, 00:25:31.239 "ns_data": { 00:25:31.239 "id": 1, 00:25:31.239 "can_share": false 00:25:31.239 } 00:25:31.239 } 00:25:31.239 ], 00:25:31.239 "mp_policy": "active_passive" 00:25:31.239 } 00:25:31.239 } 00:25:31.239 ]' 00:25:31.239 11:39:36 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:31.239 11:39:36 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:25:31.239 11:39:36 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:31.497 11:39:37 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:25:31.497 11:39:37 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:25:31.497 11:39:37 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:25:31.497 11:39:37 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:25:31.497 11:39:37 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:31.497 11:39:37 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:25:31.497 11:39:37 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:31.497 11:39:37 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:31.756 11:39:37 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=4197aa6f-774a-4ba6-9436-79ebaf9349ec 00:25:31.756 11:39:37 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:25:31.756 11:39:37 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 4197aa6f-774a-4ba6-9436-79ebaf9349ec 00:25:32.015 11:39:37 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:25:32.273 11:39:37 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=3e70d502-71cd-4152-b5e5-6a6a4ace8cc6 00:25:32.274 11:39:37 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 3e70d502-71cd-4152-b5e5-6a6a4ace8cc6 00:25:32.533 11:39:38 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=bc1e832c-235a-4d59-ade8-4e1b2f37732d 00:25:32.533 11:39:38 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 bc1e832c-235a-4d59-ade8-4e1b2f37732d 00:25:32.533 11:39:38 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:25:32.533 11:39:38 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:25:32.533 11:39:38 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=bc1e832c-235a-4d59-ade8-4e1b2f37732d 00:25:32.533 11:39:38 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:25:32.533 11:39:38 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size bc1e832c-235a-4d59-ade8-4e1b2f37732d 00:25:32.533 11:39:38 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=bc1e832c-235a-4d59-ade8-4e1b2f37732d 00:25:32.533 11:39:38 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:32.533 11:39:38 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:25:32.533 11:39:38 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:25:32.533 11:39:38 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bc1e832c-235a-4d59-ade8-4e1b2f37732d 00:25:32.791 11:39:38 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:32.791 { 00:25:32.791 "name": "bc1e832c-235a-4d59-ade8-4e1b2f37732d", 00:25:32.791 "aliases": [ 00:25:32.791 "lvs/nvme0n1p0" 00:25:32.791 ], 00:25:32.791 "product_name": "Logical Volume", 00:25:32.791 "block_size": 4096, 00:25:32.791 "num_blocks": 26476544, 00:25:32.791 "uuid": "bc1e832c-235a-4d59-ade8-4e1b2f37732d", 00:25:32.791 "assigned_rate_limits": { 00:25:32.791 "rw_ios_per_sec": 0, 00:25:32.791 "rw_mbytes_per_sec": 0, 00:25:32.791 "r_mbytes_per_sec": 0, 00:25:32.791 "w_mbytes_per_sec": 0 00:25:32.792 }, 00:25:32.792 "claimed": false, 00:25:32.792 "zoned": false, 00:25:32.792 "supported_io_types": { 00:25:32.792 "read": true, 00:25:32.792 "write": true, 00:25:32.792 "unmap": true, 00:25:32.792 "flush": false, 00:25:32.792 "reset": true, 00:25:32.792 "nvme_admin": false, 00:25:32.792 "nvme_io": false, 00:25:32.792 "nvme_io_md": false, 00:25:32.792 "write_zeroes": true, 00:25:32.792 "zcopy": false, 00:25:32.792 "get_zone_info": false, 00:25:32.792 "zone_management": false, 00:25:32.792 "zone_append": false, 00:25:32.792 "compare": false, 00:25:32.792 "compare_and_write": false, 00:25:32.792 "abort": false, 00:25:32.792 "seek_hole": true, 00:25:32.792 "seek_data": true, 00:25:32.792 "copy": false, 00:25:32.792 "nvme_iov_md": false 00:25:32.792 }, 00:25:32.792 "driver_specific": { 00:25:32.792 "lvol": { 00:25:32.792 "lvol_store_uuid": "3e70d502-71cd-4152-b5e5-6a6a4ace8cc6", 00:25:32.792 "base_bdev": "nvme0n1", 00:25:32.792 "thin_provision": true, 00:25:32.792 "num_allocated_clusters": 0, 00:25:32.792 "snapshot": false, 00:25:32.792 "clone": false, 00:25:32.792 "esnap_clone": false 00:25:32.792 } 00:25:32.792 } 00:25:32.792 } 00:25:32.792 ]' 00:25:32.792 11:39:38 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:32.792 11:39:38 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:25:32.792 11:39:38 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:33.050 11:39:38 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:33.050 11:39:38 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:33.050 11:39:38 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:25:33.050 11:39:38 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:25:33.050 11:39:38 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:25:33.050 11:39:38 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:33.310 11:39:38 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:33.310 11:39:38 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:25:33.310 11:39:38 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size bc1e832c-235a-4d59-ade8-4e1b2f37732d 00:25:33.310 11:39:38 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=bc1e832c-235a-4d59-ade8-4e1b2f37732d 00:25:33.310 11:39:38 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:33.310 11:39:38 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:25:33.310 11:39:38 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:25:33.310 11:39:38 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bc1e832c-235a-4d59-ade8-4e1b2f37732d 00:25:33.575 11:39:39 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:33.575 { 00:25:33.575 "name": "bc1e832c-235a-4d59-ade8-4e1b2f37732d", 00:25:33.575 "aliases": [ 00:25:33.575 "lvs/nvme0n1p0" 00:25:33.575 ], 00:25:33.575 "product_name": "Logical Volume", 00:25:33.575 "block_size": 4096, 00:25:33.575 "num_blocks": 26476544, 00:25:33.575 "uuid": "bc1e832c-235a-4d59-ade8-4e1b2f37732d", 00:25:33.575 "assigned_rate_limits": { 00:25:33.575 "rw_ios_per_sec": 0, 00:25:33.575 "rw_mbytes_per_sec": 0, 00:25:33.575 "r_mbytes_per_sec": 0, 00:25:33.575 "w_mbytes_per_sec": 0 00:25:33.575 }, 00:25:33.575 "claimed": false, 00:25:33.576 "zoned": false, 00:25:33.576 "supported_io_types": { 00:25:33.576 "read": true, 00:25:33.576 "write": true, 00:25:33.576 "unmap": true, 00:25:33.576 "flush": false, 00:25:33.576 "reset": true, 00:25:33.576 "nvme_admin": false, 00:25:33.576 "nvme_io": false, 00:25:33.576 "nvme_io_md": false, 00:25:33.576 "write_zeroes": true, 00:25:33.576 "zcopy": false, 00:25:33.576 "get_zone_info": false, 00:25:33.576 "zone_management": false, 00:25:33.576 "zone_append": false, 00:25:33.576 "compare": false, 00:25:33.576 "compare_and_write": false, 00:25:33.576 "abort": false, 00:25:33.576 "seek_hole": true, 00:25:33.576 "seek_data": true, 00:25:33.576 "copy": false, 00:25:33.576 "nvme_iov_md": false 00:25:33.576 }, 00:25:33.576 "driver_specific": { 00:25:33.576 "lvol": { 00:25:33.576 "lvol_store_uuid": "3e70d502-71cd-4152-b5e5-6a6a4ace8cc6", 00:25:33.576 "base_bdev": "nvme0n1", 00:25:33.576 "thin_provision": true, 00:25:33.576 "num_allocated_clusters": 0, 00:25:33.576 "snapshot": false, 00:25:33.576 "clone": false, 00:25:33.576 "esnap_clone": false 00:25:33.576 } 00:25:33.576 } 00:25:33.576 } 00:25:33.576 ]' 00:25:33.576 11:39:39 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:33.576 11:39:39 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:25:33.577 11:39:39 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:33.577 11:39:39 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:33.578 11:39:39 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:33.588 11:39:39 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:25:33.588 11:39:39 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:25:33.588 11:39:39 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:33.848 11:39:39 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:25:33.849 11:39:39 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:25:33.849 11:39:39 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size bc1e832c-235a-4d59-ade8-4e1b2f37732d 00:25:33.849 11:39:39 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=bc1e832c-235a-4d59-ade8-4e1b2f37732d 00:25:33.849 11:39:39 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:33.849 11:39:39 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:25:33.849 11:39:39 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:25:33.849 11:39:39 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bc1e832c-235a-4d59-ade8-4e1b2f37732d 00:25:34.109 11:39:39 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:34.109 { 00:25:34.109 "name": "bc1e832c-235a-4d59-ade8-4e1b2f37732d", 00:25:34.109 "aliases": [ 00:25:34.109 "lvs/nvme0n1p0" 00:25:34.109 ], 00:25:34.109 "product_name": "Logical Volume", 00:25:34.109 "block_size": 4096, 00:25:34.109 "num_blocks": 26476544, 00:25:34.109 "uuid": "bc1e832c-235a-4d59-ade8-4e1b2f37732d", 00:25:34.109 "assigned_rate_limits": { 00:25:34.109 "rw_ios_per_sec": 0, 00:25:34.109 "rw_mbytes_per_sec": 0, 00:25:34.109 "r_mbytes_per_sec": 0, 00:25:34.109 "w_mbytes_per_sec": 0 00:25:34.109 }, 00:25:34.109 "claimed": false, 00:25:34.109 "zoned": false, 00:25:34.109 "supported_io_types": { 00:25:34.109 "read": true, 00:25:34.109 "write": true, 00:25:34.109 "unmap": true, 00:25:34.109 "flush": false, 00:25:34.109 "reset": true, 00:25:34.109 "nvme_admin": false, 00:25:34.109 "nvme_io": false, 00:25:34.109 "nvme_io_md": false, 00:25:34.109 "write_zeroes": true, 00:25:34.109 "zcopy": false, 00:25:34.109 "get_zone_info": false, 00:25:34.109 "zone_management": false, 00:25:34.109 "zone_append": false, 00:25:34.109 "compare": false, 00:25:34.109 "compare_and_write": false, 00:25:34.109 "abort": false, 00:25:34.109 "seek_hole": true, 00:25:34.109 "seek_data": true, 00:25:34.109 "copy": false, 00:25:34.109 "nvme_iov_md": false 00:25:34.109 }, 00:25:34.109 "driver_specific": { 00:25:34.109 "lvol": { 00:25:34.109 "lvol_store_uuid": "3e70d502-71cd-4152-b5e5-6a6a4ace8cc6", 00:25:34.109 "base_bdev": "nvme0n1", 00:25:34.109 "thin_provision": true, 00:25:34.109 "num_allocated_clusters": 0, 00:25:34.109 "snapshot": false, 00:25:34.109 "clone": false, 00:25:34.109 "esnap_clone": false 00:25:34.109 } 00:25:34.109 } 00:25:34.109 } 00:25:34.109 ]' 00:25:34.109 11:39:39 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:34.109 11:39:39 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:25:34.109 11:39:39 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:34.109 11:39:39 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:25:34.109 11:39:39 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:34.109 11:39:39 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:25:34.109 11:39:39 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:25:34.109 11:39:39 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d bc1e832c-235a-4d59-ade8-4e1b2f37732d -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:25:34.370 [2024-11-20 11:39:40.011437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.370 [2024-11-20 11:39:40.011508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:34.370 [2024-11-20 11:39:40.011529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:34.370 [2024-11-20 11:39:40.011540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.370 [2024-11-20 11:39:40.015219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.370 [2024-11-20 11:39:40.015442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:34.370 [2024-11-20 11:39:40.015489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.643 ms 00:25:34.370 [2024-11-20 11:39:40.015503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.370 [2024-11-20 11:39:40.015793] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:34.370 [2024-11-20 11:39:40.016919] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:34.370 [2024-11-20 11:39:40.016956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.370 [2024-11-20 11:39:40.016970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:34.370 [2024-11-20 11:39:40.016985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.176 ms 00:25:34.370 [2024-11-20 11:39:40.016997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.370 [2024-11-20 11:39:40.017099] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 963441e8-3606-45d4-aa8c-a0a9c7a666bb 00:25:34.370 [2024-11-20 11:39:40.018747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.370 [2024-11-20 11:39:40.018788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:25:34.370 [2024-11-20 11:39:40.018804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:25:34.370 [2024-11-20 11:39:40.018818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.370 [2024-11-20 11:39:40.026761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.370 [2024-11-20 11:39:40.026841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:34.370 [2024-11-20 11:39:40.026865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.847 ms 00:25:34.370 [2024-11-20 11:39:40.026887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.370 [2024-11-20 11:39:40.027119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.370 [2024-11-20 11:39:40.027144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:34.370 [2024-11-20 11:39:40.027159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.111 ms 00:25:34.370 [2024-11-20 11:39:40.027182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.370 [2024-11-20 11:39:40.027238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.370 [2024-11-20 11:39:40.027259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:34.370 [2024-11-20 11:39:40.027273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:25:34.370 [2024-11-20 11:39:40.027290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.370 [2024-11-20 11:39:40.027346] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:34.370 [2024-11-20 11:39:40.034043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.370 [2024-11-20 11:39:40.034304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:34.370 [2024-11-20 11:39:40.034362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.702 ms 00:25:34.370 [2024-11-20 11:39:40.034377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.370 [2024-11-20 11:39:40.034519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.370 [2024-11-20 11:39:40.034537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:34.370 [2024-11-20 11:39:40.034553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:25:34.370 [2024-11-20 11:39:40.034587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.370 [2024-11-20 11:39:40.034634] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:34.370 [2024-11-20 11:39:40.034793] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:34.370 [2024-11-20 11:39:40.034818] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:34.370 [2024-11-20 11:39:40.034836] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:34.370 [2024-11-20 11:39:40.034855] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:34.370 [2024-11-20 11:39:40.034870] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:34.370 [2024-11-20 11:39:40.034887] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:34.370 [2024-11-20 11:39:40.034900] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:34.370 [2024-11-20 11:39:40.034916] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:34.370 [2024-11-20 11:39:40.034931] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:34.370 [2024-11-20 11:39:40.034947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.370 [2024-11-20 11:39:40.034961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:34.370 [2024-11-20 11:39:40.034976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:25:34.370 [2024-11-20 11:39:40.034989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.370 [2024-11-20 11:39:40.035104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.370 
[2024-11-20 11:39:40.035118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:34.371 [2024-11-20 11:39:40.035134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:34.371 [2024-11-20 11:39:40.035146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.371 [2024-11-20 11:39:40.035291] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:34.371 [2024-11-20 11:39:40.035305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:34.371 [2024-11-20 11:39:40.035321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:34.371 [2024-11-20 11:39:40.035335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:34.371 [2024-11-20 11:39:40.035351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:34.371 [2024-11-20 11:39:40.035363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:34.371 [2024-11-20 11:39:40.035377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:34.371 [2024-11-20 11:39:40.035389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:34.371 [2024-11-20 11:39:40.035404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:34.371 [2024-11-20 11:39:40.035416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:34.371 [2024-11-20 11:39:40.035431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:34.371 [2024-11-20 11:39:40.035442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:34.371 [2024-11-20 11:39:40.035458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:34.371 [2024-11-20 11:39:40.035482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:34.371 [2024-11-20 11:39:40.035499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:34.371 [2024-11-20 11:39:40.035510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:34.371 [2024-11-20 11:39:40.035528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:34.371 [2024-11-20 11:39:40.035540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:34.371 [2024-11-20 11:39:40.035554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:34.371 [2024-11-20 11:39:40.035566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:34.371 [2024-11-20 11:39:40.035583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:34.371 [2024-11-20 11:39:40.035594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:34.371 [2024-11-20 11:39:40.035609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:34.371 [2024-11-20 11:39:40.035620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:34.371 [2024-11-20 11:39:40.035637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:34.371 [2024-11-20 11:39:40.035652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:34.371 [2024-11-20 11:39:40.035667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:34.371 [2024-11-20 11:39:40.035678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:34.371 [2024-11-20 11:39:40.035692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:25:34.371 [2024-11-20 11:39:40.035704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:34.371 [2024-11-20 11:39:40.035732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:34.371 [2024-11-20 11:39:40.035742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:34.371 [2024-11-20 11:39:40.035759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:34.371 [2024-11-20 11:39:40.035769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:34.371 [2024-11-20 11:39:40.035782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:34.371 [2024-11-20 11:39:40.035792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:34.371 [2024-11-20 11:39:40.035805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:34.371 [2024-11-20 11:39:40.035816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:34.371 [2024-11-20 11:39:40.035829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:34.371 [2024-11-20 11:39:40.035839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:34.371 [2024-11-20 11:39:40.035852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:34.371 [2024-11-20 11:39:40.035862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:34.371 [2024-11-20 11:39:40.035875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:34.371 [2024-11-20 11:39:40.035885] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:34.371 [2024-11-20 11:39:40.035916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:34.371 [2024-11-20 11:39:40.035928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:34.371 [2024-11-20 11:39:40.035944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:34.371 [2024-11-20 11:39:40.035956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:34.371 [2024-11-20 11:39:40.035976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:34.371 [2024-11-20 11:39:40.035987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:34.371 [2024-11-20 11:39:40.036003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:34.371 [2024-11-20 11:39:40.036014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:34.371 [2024-11-20 11:39:40.036029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:34.371 [2024-11-20 11:39:40.036045] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:34.371 [2024-11-20 11:39:40.036064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:34.371 [2024-11-20 11:39:40.036079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:34.371 [2024-11-20 11:39:40.036095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:34.371 [2024-11-20 11:39:40.036109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:25:34.371 [2024-11-20 11:39:40.036125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:34.371 [2024-11-20 11:39:40.036137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:34.371 [2024-11-20 11:39:40.036153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:34.371 [2024-11-20 11:39:40.036166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:34.371 [2024-11-20 11:39:40.036182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:34.371 [2024-11-20 11:39:40.036194] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:34.371 [2024-11-20 11:39:40.036222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:34.371 [2024-11-20 11:39:40.036236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:34.372 [2024-11-20 11:39:40.036252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:34.372 [2024-11-20 11:39:40.036265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:34.372 [2024-11-20 11:39:40.036281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:34.372 [2024-11-20 11:39:40.036294] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:34.372 [2024-11-20 11:39:40.036320] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:34.372 [2024-11-20 11:39:40.036334] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:34.372 [2024-11-20 11:39:40.036350] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:34.372 [2024-11-20 11:39:40.036363] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:34.372 [2024-11-20 11:39:40.036379] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:34.372 [2024-11-20 11:39:40.036393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.372 [2024-11-20 11:39:40.036409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:34.372 [2024-11-20 11:39:40.036425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.176 ms 00:25:34.372 [2024-11-20 11:39:40.036441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.372 [2024-11-20 11:39:40.036555] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:25:34.372 [2024-11-20 11:39:40.036579] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:37.660 [2024-11-20 11:39:42.821265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.660 [2024-11-20 11:39:42.821357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:37.660 [2024-11-20 11:39:42.821378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2784.694 ms 00:25:37.660 [2024-11-20 11:39:42.821394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.660 [2024-11-20 11:39:42.862616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.660 [2024-11-20 11:39:42.862851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:37.660 [2024-11-20 11:39:42.862895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.708 ms 00:25:37.660 [2024-11-20 11:39:42.862910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.660 [2024-11-20 11:39:42.863096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.660 [2024-11-20 11:39:42.863113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:37.660 [2024-11-20 11:39:42.863126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:25:37.660 [2024-11-20 11:39:42.863143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.660 [2024-11-20 11:39:42.920355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.660 [2024-11-20 11:39:42.920416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:37.660 [2024-11-20 11:39:42.920434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.147 ms 00:25:37.660 [2024-11-20 11:39:42.920449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.660 [2024-11-20 11:39:42.920579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.660 [2024-11-20 11:39:42.920610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:37.660 [2024-11-20 11:39:42.920622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:37.660 [2024-11-20 11:39:42.920652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.660 [2024-11-20 11:39:42.921115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.660 [2024-11-20 11:39:42.921153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:37.660 [2024-11-20 11:39:42.921167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.423 ms 00:25:37.660 [2024-11-20 11:39:42.921182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.660 [2024-11-20 11:39:42.921316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.660 [2024-11-20 11:39:42.921332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:37.660 [2024-11-20 11:39:42.921344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:25:37.660 [2024-11-20 11:39:42.921362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.660 [2024-11-20 11:39:42.944132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.660 [2024-11-20 11:39:42.944346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:25:37.660 [2024-11-20 11:39:42.944388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.708 ms 00:25:37.660 [2024-11-20 11:39:42.944405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.660 [2024-11-20 11:39:42.958724] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:37.660 [2024-11-20 11:39:42.975743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.660 [2024-11-20 11:39:42.975809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:37.660 [2024-11-20 11:39:42.975828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.121 ms 00:25:37.660 [2024-11-20 11:39:42.975838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.660 [2024-11-20 11:39:43.061371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.660 [2024-11-20 11:39:43.061441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:37.660 [2024-11-20 11:39:43.061460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.398 ms 00:25:37.660 [2024-11-20 11:39:43.061485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.660 [2024-11-20 11:39:43.061753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.660 [2024-11-20 11:39:43.061769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:37.660 [2024-11-20 11:39:43.061786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:25:37.660 [2024-11-20 11:39:43.061796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.660 [2024-11-20 11:39:43.099228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.660 [2024-11-20 11:39:43.099288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:25:37.660 [2024-11-20 11:39:43.099307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.385 ms 00:25:37.660 [2024-11-20 11:39:43.099319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.660 [2024-11-20 11:39:43.138964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.660 [2024-11-20 11:39:43.139024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:37.660 [2024-11-20 11:39:43.139044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.534 ms 00:25:37.660 [2024-11-20 11:39:43.139072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.660 [2024-11-20 11:39:43.139963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.660 [2024-11-20 11:39:43.139990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:37.661 [2024-11-20 11:39:43.140006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.773 ms 00:25:37.661 [2024-11-20 11:39:43.140018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.661 [2024-11-20 11:39:43.249819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.661 [2024-11-20 11:39:43.250054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:25:37.661 [2024-11-20 11:39:43.250095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 109.751 ms 00:25:37.661 [2024-11-20 11:39:43.250108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
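Note (annotation, not harness output): the "l2p maximum resident size is: 59 (of 60) MiB" line above follows directly from the geometry reported earlier in this trace. The layout dump lists 23592960 L2P entries at an address size of 4 bytes, i.e. a 90 MiB mapping table (matching "Region l2p ... blocks: 90.00 MiB"), while bdev_ftl_create was invoked with --l2p_dram_limit 60, so the L2P cache pins at most 59 of its 60 MiB DRAM budget. A quick shell check of that arithmetic, using only values copied from the log:

    # L2P table size = entry count * address size (both taken from the layout dump above)
    entries=23592960
    addr_size=4
    echo "$(( entries * addr_size / 1024 / 1024 )) MiB"   # -> 90 MiB, of which at most 60 MiB may stay DRAM-resident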
00:25:37.661 [2024-11-20 11:39:43.292868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.661 [2024-11-20 11:39:43.293083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:25:37.661 [2024-11-20 11:39:43.293116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.586 ms 00:25:37.661 [2024-11-20 11:39:43.293128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.661 [2024-11-20 11:39:43.333936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.661 [2024-11-20 11:39:43.334144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:25:37.661 [2024-11-20 11:39:43.334176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.647 ms 00:25:37.661 [2024-11-20 11:39:43.334188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.661 [2024-11-20 11:39:43.373988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.661 [2024-11-20 11:39:43.374041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:37.661 [2024-11-20 11:39:43.374060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.686 ms 00:25:37.661 [2024-11-20 11:39:43.374088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.661 [2024-11-20 11:39:43.374195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.661 [2024-11-20 11:39:43.374214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:37.661 [2024-11-20 11:39:43.374232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:37.661 [2024-11-20 11:39:43.374242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.661 [2024-11-20 11:39:43.374334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.661 [2024-11-20 11:39:43.374346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:37.661 [2024-11-20 11:39:43.374359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:25:37.661 [2024-11-20 11:39:43.374369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.661 [2024-11-20 11:39:43.375652] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:37.661 [2024-11-20 11:39:43.380649] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3363.813 ms, result 0 00:25:37.661 [2024-11-20 11:39:43.381572] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:37.661 { 00:25:37.661 "name": "ftl0", 00:25:37.661 "uuid": "963441e8-3606-45d4-aa8c-a0a9c7a666bb" 00:25:37.661 } 00:25:37.661 11:39:43 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:25:37.661 11:39:43 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:25:37.661 11:39:43 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:37.661 11:39:43 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:25:37.661 11:39:43 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:37.661 11:39:43 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:37.661 11:39:43 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:38.228 11:39:43 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:25:38.228 [ 00:25:38.228 { 00:25:38.228 "name": "ftl0", 00:25:38.228 "aliases": [ 00:25:38.228 "963441e8-3606-45d4-aa8c-a0a9c7a666bb" 00:25:38.228 ], 00:25:38.228 "product_name": "FTL disk", 00:25:38.228 "block_size": 4096, 00:25:38.228 "num_blocks": 23592960, 00:25:38.228 "uuid": "963441e8-3606-45d4-aa8c-a0a9c7a666bb", 00:25:38.228 "assigned_rate_limits": { 00:25:38.228 "rw_ios_per_sec": 0, 00:25:38.228 "rw_mbytes_per_sec": 0, 00:25:38.228 "r_mbytes_per_sec": 0, 00:25:38.228 "w_mbytes_per_sec": 0 00:25:38.228 }, 00:25:38.228 "claimed": false, 00:25:38.228 "zoned": false, 00:25:38.228 "supported_io_types": { 00:25:38.228 "read": true, 00:25:38.228 "write": true, 00:25:38.228 "unmap": true, 00:25:38.228 "flush": true, 00:25:38.228 "reset": false, 00:25:38.228 "nvme_admin": false, 00:25:38.228 "nvme_io": false, 00:25:38.228 "nvme_io_md": false, 00:25:38.228 "write_zeroes": true, 00:25:38.228 "zcopy": false, 00:25:38.228 "get_zone_info": false, 00:25:38.228 "zone_management": false, 00:25:38.228 "zone_append": false, 00:25:38.228 "compare": false, 00:25:38.228 "compare_and_write": false, 00:25:38.228 "abort": false, 00:25:38.228 "seek_hole": false, 00:25:38.228 "seek_data": false, 00:25:38.228 "copy": false, 00:25:38.228 "nvme_iov_md": false 00:25:38.228 }, 00:25:38.228 "driver_specific": { 00:25:38.228 "ftl": { 00:25:38.228 "base_bdev": "bc1e832c-235a-4d59-ade8-4e1b2f37732d", 00:25:38.228 "cache": "nvc0n1p0" 00:25:38.228 } 00:25:38.228 } 00:25:38.228 } 00:25:38.228 ] 00:25:38.228 11:39:43 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:25:38.228 11:39:43 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:25:38.228 11:39:43 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:38.795 11:39:44 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:25:38.795 11:39:44 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:25:39.053 11:39:44 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:25:39.053 { 00:25:39.053 "name": "ftl0", 00:25:39.053 "aliases": [ 00:25:39.053 "963441e8-3606-45d4-aa8c-a0a9c7a666bb" 00:25:39.053 ], 00:25:39.053 "product_name": "FTL disk", 00:25:39.053 "block_size": 4096, 00:25:39.053 "num_blocks": 23592960, 00:25:39.053 "uuid": "963441e8-3606-45d4-aa8c-a0a9c7a666bb", 00:25:39.053 "assigned_rate_limits": { 00:25:39.053 "rw_ios_per_sec": 0, 00:25:39.053 "rw_mbytes_per_sec": 0, 00:25:39.053 "r_mbytes_per_sec": 0, 00:25:39.053 "w_mbytes_per_sec": 0 00:25:39.053 }, 00:25:39.053 "claimed": false, 00:25:39.053 "zoned": false, 00:25:39.053 "supported_io_types": { 00:25:39.053 "read": true, 00:25:39.053 "write": true, 00:25:39.053 "unmap": true, 00:25:39.053 "flush": true, 00:25:39.053 "reset": false, 00:25:39.053 "nvme_admin": false, 00:25:39.053 "nvme_io": false, 00:25:39.053 "nvme_io_md": false, 00:25:39.053 "write_zeroes": true, 00:25:39.053 "zcopy": false, 00:25:39.053 "get_zone_info": false, 00:25:39.053 "zone_management": false, 00:25:39.053 "zone_append": false, 00:25:39.053 "compare": false, 00:25:39.053 "compare_and_write": false, 00:25:39.053 "abort": false, 00:25:39.053 "seek_hole": false, 00:25:39.053 "seek_data": false, 00:25:39.053 "copy": false, 00:25:39.053 "nvme_iov_md": false 00:25:39.053 }, 00:25:39.053 "driver_specific": { 00:25:39.053 "ftl": { 00:25:39.053 "base_bdev": "bc1e832c-235a-4d59-ade8-4e1b2f37732d", 
00:25:39.053 "cache": "nvc0n1p0" 00:25:39.053 } 00:25:39.053 } 00:25:39.053 } 00:25:39.053 ]' 00:25:39.053 11:39:44 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:25:39.053 11:39:44 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:25:39.053 11:39:44 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:25:39.312 [2024-11-20 11:39:44.840733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.312 [2024-11-20 11:39:44.841007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:39.312 [2024-11-20 11:39:44.841038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:39.312 [2024-11-20 11:39:44.841058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.312 [2024-11-20 11:39:44.841113] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:39.312 [2024-11-20 11:39:44.846055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.312 [2024-11-20 11:39:44.846092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:39.312 [2024-11-20 11:39:44.846117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.918 ms 00:25:39.312 [2024-11-20 11:39:44.846130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.312 [2024-11-20 11:39:44.846709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.312 [2024-11-20 11:39:44.846726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:39.312 [2024-11-20 11:39:44.846740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.513 ms 00:25:39.312 [2024-11-20 11:39:44.846750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.312 [2024-11-20 11:39:44.849725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.312 [2024-11-20 11:39:44.849753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:39.312 [2024-11-20 11:39:44.849769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.938 ms 00:25:39.312 [2024-11-20 11:39:44.849780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.312 [2024-11-20 11:39:44.855891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.312 [2024-11-20 11:39:44.856031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:39.312 [2024-11-20 11:39:44.856057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.052 ms 00:25:39.312 [2024-11-20 11:39:44.856068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.312 [2024-11-20 11:39:44.895813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.312 [2024-11-20 11:39:44.895868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:39.312 [2024-11-20 11:39:44.895892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.650 ms 00:25:39.312 [2024-11-20 11:39:44.895902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.312 [2024-11-20 11:39:44.918804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.312 [2024-11-20 11:39:44.918868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:39.312 [2024-11-20 11:39:44.918890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 22.795 ms 00:25:39.312 [2024-11-20 11:39:44.918905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.312 [2024-11-20 11:39:44.919126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.312 [2024-11-20 11:39:44.919141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:39.312 [2024-11-20 11:39:44.919155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:25:39.312 [2024-11-20 11:39:44.919166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.312 [2024-11-20 11:39:44.958987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.312 [2024-11-20 11:39:44.959047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:39.312 [2024-11-20 11:39:44.959066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.777 ms 00:25:39.312 [2024-11-20 11:39:44.959077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.312 [2024-11-20 11:39:44.998083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.312 [2024-11-20 11:39:44.998312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:39.312 [2024-11-20 11:39:44.998348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.890 ms 00:25:39.312 [2024-11-20 11:39:44.998360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.312 [2024-11-20 11:39:45.036549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.312 [2024-11-20 11:39:45.036740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:39.312 [2024-11-20 11:39:45.036769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.046 ms 00:25:39.312 [2024-11-20 11:39:45.036780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.572 [2024-11-20 11:39:45.076882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.572 [2024-11-20 11:39:45.076940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:39.572 [2024-11-20 11:39:45.076961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.916 ms 00:25:39.572 [2024-11-20 11:39:45.076973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.572 [2024-11-20 11:39:45.077105] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:39.572 [2024-11-20 11:39:45.077127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:39.572 [2024-11-20 11:39:45.077154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:39.572 [2024-11-20 11:39:45.077183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:39.572 [2024-11-20 11:39:45.077199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:39.572 [2024-11-20 11:39:45.077211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:39.572 [2024-11-20 11:39:45.077231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:39.572 [2024-11-20 11:39:45.077244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:39.572 [2024-11-20 11:39:45.077259] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:39.572 [2024-11-20 11:39:45.077276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:39.572 [2024-11-20 11:39:45.077291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:39.572 [2024-11-20 11:39:45.077304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:39.572 [2024-11-20 11:39:45.077319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:39.572 [2024-11-20 11:39:45.077332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:39.572 [2024-11-20 11:39:45.077347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:39.572 [2024-11-20 11:39:45.077360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:39.572 [2024-11-20 11:39:45.077375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:39.572 [2024-11-20 11:39:45.077387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:39.572 [2024-11-20 11:39:45.077404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:39.572 [2024-11-20 11:39:45.077416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:39.572 [2024-11-20 11:39:45.077435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:39.572 [2024-11-20 11:39:45.077447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:39.572 [2024-11-20 11:39:45.077510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:39.572 [2024-11-20 11:39:45.077523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:39.572 [2024-11-20 11:39:45.077539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:39.572 [2024-11-20 11:39:45.077551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:39.572 [2024-11-20 11:39:45.077567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:39.572 [2024-11-20 11:39:45.077580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:39.572 [2024-11-20 11:39:45.077595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:39.572 [2024-11-20 11:39:45.077608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:39.572 [2024-11-20 11:39:45.077624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:39.572 [2024-11-20 11:39:45.077637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:39.572 [2024-11-20 11:39:45.077654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:39.572 
[2024-11-20 11:39:45.077667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:39.572 [2024-11-20 11:39:45.077684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:39.572 [2024-11-20 11:39:45.077696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:39.572 [2024-11-20 11:39:45.077712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:39.572 [2024-11-20 11:39:45.077724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:39.572 [2024-11-20 11:39:45.077742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:39.572 [2024-11-20 11:39:45.077755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:39.572 [2024-11-20 11:39:45.077770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:39.572 [2024-11-20 11:39:45.077782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:39.572 [2024-11-20 11:39:45.077797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:39.572 [2024-11-20 11:39:45.077810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:39.572 [2024-11-20 11:39:45.077827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:39.572 [2024-11-20 11:39:45.077839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:39.572 [2024-11-20 11:39:45.077855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:39.572 [2024-11-20 11:39:45.077867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:39.572 [2024-11-20 11:39:45.077882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:39.572 [2024-11-20 11:39:45.077895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:39.572 [2024-11-20 11:39:45.077910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:39.572 [2024-11-20 11:39:45.077922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:39.572 [2024-11-20 11:39:45.077937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:39.572 [2024-11-20 11:39:45.077949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:39.572 [2024-11-20 11:39:45.077968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:39.572 [2024-11-20 11:39:45.077980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:39.572 [2024-11-20 11:39:45.077995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:39.573 [2024-11-20 11:39:45.078007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:25:39.573 [2024-11-20 11:39:45.078022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:39.573 [2024-11-20 11:39:45.078035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:39.573 [2024-11-20 11:39:45.078050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:39.573 [2024-11-20 11:39:45.078062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:39.573 [2024-11-20 11:39:45.078077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:39.573 [2024-11-20 11:39:45.078089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:39.573 [2024-11-20 11:39:45.078104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:39.573 [2024-11-20 11:39:45.078117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:39.573 [2024-11-20 11:39:45.078132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:39.573 [2024-11-20 11:39:45.078144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:39.573 [2024-11-20 11:39:45.078160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:39.573 [2024-11-20 11:39:45.078173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:39.573 [2024-11-20 11:39:45.078193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:39.573 [2024-11-20 11:39:45.078205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:39.573 [2024-11-20 11:39:45.078221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:39.573 [2024-11-20 11:39:45.078234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:39.573 [2024-11-20 11:39:45.078249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:39.573 [2024-11-20 11:39:45.078272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:39.573 [2024-11-20 11:39:45.078286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:39.573 [2024-11-20 11:39:45.078297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:39.573 [2024-11-20 11:39:45.078311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:39.573 [2024-11-20 11:39:45.078323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:39.573 [2024-11-20 11:39:45.078338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:39.573 [2024-11-20 11:39:45.078349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:39.573 [2024-11-20 11:39:45.078363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:25:39.573 [2024-11-20 11:39:45.078375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:39.573 [2024-11-20 11:39:45.078389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:39.573 [2024-11-20 11:39:45.078401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:39.573 [2024-11-20 11:39:45.078419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:39.573 [2024-11-20 11:39:45.078430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:39.573 [2024-11-20 11:39:45.078445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:39.573 [2024-11-20 11:39:45.078456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:39.573 [2024-11-20 11:39:45.078471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:39.573 [2024-11-20 11:39:45.078491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:39.573 [2024-11-20 11:39:45.078506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:39.573 [2024-11-20 11:39:45.078518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:39.573 [2024-11-20 11:39:45.078533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:39.573 [2024-11-20 11:39:45.078544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:39.573 [2024-11-20 11:39:45.078559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:39.573 [2024-11-20 11:39:45.078572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:39.573 [2024-11-20 11:39:45.078588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:39.573 [2024-11-20 11:39:45.078600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:39.573 [2024-11-20 11:39:45.078614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:39.573 [2024-11-20 11:39:45.078634] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:39.573 [2024-11-20 11:39:45.078651] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 963441e8-3606-45d4-aa8c-a0a9c7a666bb 00:25:39.573 [2024-11-20 11:39:45.078663] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:39.573 [2024-11-20 11:39:45.078676] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:39.573 [2024-11-20 11:39:45.078687] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:39.573 [2024-11-20 11:39:45.078701] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:39.573 [2024-11-20 11:39:45.078715] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:39.573 [2024-11-20 11:39:45.078729] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:25:39.573 [2024-11-20 11:39:45.078740] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:39.573 [2024-11-20 11:39:45.078752] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:39.573 [2024-11-20 11:39:45.078762] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:39.573 [2024-11-20 11:39:45.078776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.573 [2024-11-20 11:39:45.078787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:39.573 [2024-11-20 11:39:45.078802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.674 ms 00:25:39.573 [2024-11-20 11:39:45.078812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.573 [2024-11-20 11:39:45.101573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.573 [2024-11-20 11:39:45.101627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:39.573 [2024-11-20 11:39:45.101655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.710 ms 00:25:39.573 [2024-11-20 11:39:45.101668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.573 [2024-11-20 11:39:45.102355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.573 [2024-11-20 11:39:45.102370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:39.573 [2024-11-20 11:39:45.102386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.582 ms 00:25:39.573 [2024-11-20 11:39:45.102398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.573 [2024-11-20 11:39:45.180844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:39.573 [2024-11-20 11:39:45.180911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:39.573 [2024-11-20 11:39:45.180931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:39.573 [2024-11-20 11:39:45.180943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.573 [2024-11-20 11:39:45.181114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:39.573 [2024-11-20 11:39:45.181129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:39.573 [2024-11-20 11:39:45.181153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:39.573 [2024-11-20 11:39:45.181180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.573 [2024-11-20 11:39:45.181268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:39.573 [2024-11-20 11:39:45.181283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:39.573 [2024-11-20 11:39:45.181305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:39.573 [2024-11-20 11:39:45.181317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.573 [2024-11-20 11:39:45.181355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:39.573 [2024-11-20 11:39:45.181367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:39.573 [2024-11-20 11:39:45.181381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:39.573 [2024-11-20 11:39:45.181392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.573 [2024-11-20 11:39:45.325999] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:39.573 [2024-11-20 11:39:45.326084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:39.573 [2024-11-20 11:39:45.326104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:39.573 [2024-11-20 11:39:45.326117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.833 [2024-11-20 11:39:45.436300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:39.833 [2024-11-20 11:39:45.436365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:39.833 [2024-11-20 11:39:45.436383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:39.833 [2024-11-20 11:39:45.436395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.833 [2024-11-20 11:39:45.436542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:39.833 [2024-11-20 11:39:45.436556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:39.833 [2024-11-20 11:39:45.436592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:39.833 [2024-11-20 11:39:45.436606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.833 [2024-11-20 11:39:45.436671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:39.833 [2024-11-20 11:39:45.436685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:39.833 [2024-11-20 11:39:45.436699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:39.833 [2024-11-20 11:39:45.436709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.833 [2024-11-20 11:39:45.436849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:39.833 [2024-11-20 11:39:45.436869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:39.833 [2024-11-20 11:39:45.436882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:39.833 [2024-11-20 11:39:45.436892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.833 [2024-11-20 11:39:45.436978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:39.833 [2024-11-20 11:39:45.436996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:39.833 [2024-11-20 11:39:45.437011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:39.833 [2024-11-20 11:39:45.437021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.833 [2024-11-20 11:39:45.437082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:39.833 [2024-11-20 11:39:45.437094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:39.833 [2024-11-20 11:39:45.437112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:39.833 [2024-11-20 11:39:45.437123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.833 [2024-11-20 11:39:45.437197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:39.833 [2024-11-20 11:39:45.437213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:39.833 [2024-11-20 11:39:45.437227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:39.833 [2024-11-20 11:39:45.437239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:25:39.833 [2024-11-20 11:39:45.437440] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 596.693 ms, result 0 00:25:39.833 true 00:25:39.833 11:39:45 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 78945 00:25:39.833 11:39:45 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78945 ']' 00:25:39.833 11:39:45 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78945 00:25:39.833 11:39:45 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:25:39.833 11:39:45 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:39.833 11:39:45 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78945 00:25:39.833 11:39:45 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:39.833 killing process with pid 78945 00:25:39.833 11:39:45 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:39.833 11:39:45 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78945' 00:25:39.833 11:39:45 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78945 00:25:39.833 11:39:45 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78945 00:25:45.180 11:39:50 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:25:46.559 65536+0 records in 00:25:46.559 65536+0 records out 00:25:46.559 268435456 bytes (268 MB, 256 MiB) copied, 1.12607 s, 238 MB/s 00:25:46.559 11:39:52 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:46.559 [2024-11-20 11:39:52.106637] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
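The dd figures above are self-consistent: 65536 blocks of 4 KiB are exactly 268435456 bytes (256 MiB), and dividing the byte count by the elapsed 1.12607 s reproduces the reported rate. A quick shell sanity check, using only numbers taken from the dd output above:

  echo $(( 65536 * 4096 ))                                        # 268435456 bytes = 256 MiB written
  awk 'BEGIN { printf "%.0f MB/s\n", 268435456 / 1.12607 / 1e6 }' # prints 238 MB/s, matching dd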
00:25:46.559 [2024-11-20 11:39:52.106755] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79154 ] 00:25:46.559 [2024-11-20 11:39:52.288649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.818 [2024-11-20 11:39:52.452690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:47.077 [2024-11-20 11:39:52.824908] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:47.077 [2024-11-20 11:39:52.824969] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:47.337 [2024-11-20 11:39:52.990890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.337 [2024-11-20 11:39:52.990941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:47.337 [2024-11-20 11:39:52.990973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:47.337 [2024-11-20 11:39:52.990984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.337 [2024-11-20 11:39:52.994407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.337 [2024-11-20 11:39:52.994444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:47.337 [2024-11-20 11:39:52.994456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.400 ms 00:25:47.337 [2024-11-20 11:39:52.994466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.337 [2024-11-20 11:39:52.994598] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:47.337 [2024-11-20 11:39:52.995630] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:47.337 [2024-11-20 11:39:52.995655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.337 [2024-11-20 11:39:52.995666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:47.337 [2024-11-20 11:39:52.995677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.066 ms 00:25:47.337 [2024-11-20 11:39:52.995687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.337 [2024-11-20 11:39:52.997124] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:47.337 [2024-11-20 11:39:53.016310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.337 [2024-11-20 11:39:53.016353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:47.337 [2024-11-20 11:39:53.016367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.187 ms 00:25:47.337 [2024-11-20 11:39:53.016378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.337 [2024-11-20 11:39:53.016499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.337 [2024-11-20 11:39:53.016532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:47.337 [2024-11-20 11:39:53.016543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:25:47.337 [2024-11-20 11:39:53.016553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.337 [2024-11-20 11:39:53.023158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:47.337 [2024-11-20 11:39:53.023184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:47.337 [2024-11-20 11:39:53.023195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.559 ms 00:25:47.337 [2024-11-20 11:39:53.023206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.337 [2024-11-20 11:39:53.023317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.337 [2024-11-20 11:39:53.023332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:47.337 [2024-11-20 11:39:53.023343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:25:47.337 [2024-11-20 11:39:53.023354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.337 [2024-11-20 11:39:53.023384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.337 [2024-11-20 11:39:53.023399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:47.337 [2024-11-20 11:39:53.023410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:47.337 [2024-11-20 11:39:53.023419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.337 [2024-11-20 11:39:53.023444] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:47.337 [2024-11-20 11:39:53.028598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.337 [2024-11-20 11:39:53.028626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:47.337 [2024-11-20 11:39:53.028639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.161 ms 00:25:47.337 [2024-11-20 11:39:53.028648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.337 [2024-11-20 11:39:53.028717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.337 [2024-11-20 11:39:53.028730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:47.337 [2024-11-20 11:39:53.028741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:47.337 [2024-11-20 11:39:53.028751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.337 [2024-11-20 11:39:53.028771] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:47.337 [2024-11-20 11:39:53.028804] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:47.337 [2024-11-20 11:39:53.028840] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:47.337 [2024-11-20 11:39:53.028857] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:47.337 [2024-11-20 11:39:53.028949] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:47.337 [2024-11-20 11:39:53.028962] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:47.337 [2024-11-20 11:39:53.028975] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:47.337 [2024-11-20 11:39:53.028988] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:47.337 [2024-11-20 11:39:53.029007] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:47.337 [2024-11-20 11:39:53.029018] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:47.337 [2024-11-20 11:39:53.029028] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:47.337 [2024-11-20 11:39:53.029038] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:47.337 [2024-11-20 11:39:53.029047] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:47.337 [2024-11-20 11:39:53.029058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.337 [2024-11-20 11:39:53.029068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:47.337 [2024-11-20 11:39:53.029078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:25:47.337 [2024-11-20 11:39:53.029088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.337 [2024-11-20 11:39:53.029177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.338 [2024-11-20 11:39:53.029189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:47.338 [2024-11-20 11:39:53.029205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:25:47.338 [2024-11-20 11:39:53.029215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.338 [2024-11-20 11:39:53.029312] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:47.338 [2024-11-20 11:39:53.029325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:47.338 [2024-11-20 11:39:53.029336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:47.338 [2024-11-20 11:39:53.029346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:47.338 [2024-11-20 11:39:53.029357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:47.338 [2024-11-20 11:39:53.029367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:47.338 [2024-11-20 11:39:53.029377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:47.338 [2024-11-20 11:39:53.029386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:47.338 [2024-11-20 11:39:53.029396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:47.338 [2024-11-20 11:39:53.029405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:47.338 [2024-11-20 11:39:53.029415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:47.338 [2024-11-20 11:39:53.029424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:47.338 [2024-11-20 11:39:53.029433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:47.338 [2024-11-20 11:39:53.029457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:47.338 [2024-11-20 11:39:53.029467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:47.338 [2024-11-20 11:39:53.029487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:47.338 [2024-11-20 11:39:53.029497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:47.338 [2024-11-20 11:39:53.029506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:47.338 [2024-11-20 11:39:53.029516] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:47.338 [2024-11-20 11:39:53.029525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:47.338 [2024-11-20 11:39:53.029535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:47.338 [2024-11-20 11:39:53.029544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:47.338 [2024-11-20 11:39:53.029553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:47.338 [2024-11-20 11:39:53.029562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:47.338 [2024-11-20 11:39:53.029571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:47.338 [2024-11-20 11:39:53.029580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:47.338 [2024-11-20 11:39:53.029590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:47.338 [2024-11-20 11:39:53.029599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:47.338 [2024-11-20 11:39:53.029608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:47.338 [2024-11-20 11:39:53.029617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:47.338 [2024-11-20 11:39:53.029626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:47.338 [2024-11-20 11:39:53.029635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:47.338 [2024-11-20 11:39:53.029644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:47.338 [2024-11-20 11:39:53.029654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:47.338 [2024-11-20 11:39:53.029663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:47.338 [2024-11-20 11:39:53.029672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:47.338 [2024-11-20 11:39:53.029685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:47.338 [2024-11-20 11:39:53.029694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:47.338 [2024-11-20 11:39:53.029703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:47.338 [2024-11-20 11:39:53.029712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:47.338 [2024-11-20 11:39:53.029721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:47.338 [2024-11-20 11:39:53.029730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:47.338 [2024-11-20 11:39:53.029739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:47.338 [2024-11-20 11:39:53.029748] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:47.338 [2024-11-20 11:39:53.029758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:47.338 [2024-11-20 11:39:53.029768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:47.338 [2024-11-20 11:39:53.029785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:47.338 [2024-11-20 11:39:53.029795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:47.338 [2024-11-20 11:39:53.029805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:47.338 [2024-11-20 11:39:53.029814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:47.338 
[2024-11-20 11:39:53.029824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:47.338 [2024-11-20 11:39:53.029833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:47.338 [2024-11-20 11:39:53.029842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:47.338 [2024-11-20 11:39:53.029853] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:47.338 [2024-11-20 11:39:53.029865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:47.338 [2024-11-20 11:39:53.029876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:47.338 [2024-11-20 11:39:53.029887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:47.338 [2024-11-20 11:39:53.029897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:25:47.338 [2024-11-20 11:39:53.029908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:47.338 [2024-11-20 11:39:53.029919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:47.338 [2024-11-20 11:39:53.029929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:47.338 [2024-11-20 11:39:53.029939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:47.338 [2024-11-20 11:39:53.029949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:47.338 [2024-11-20 11:39:53.029959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:47.338 [2024-11-20 11:39:53.029969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:47.338 [2024-11-20 11:39:53.029980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:47.338 [2024-11-20 11:39:53.029990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:47.338 [2024-11-20 11:39:53.030000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:47.338 [2024-11-20 11:39:53.030012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:47.338 [2024-11-20 11:39:53.030023] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:47.338 [2024-11-20 11:39:53.030034] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:47.338 [2024-11-20 11:39:53.030046] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:47.338 [2024-11-20 11:39:53.030056] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:47.338 [2024-11-20 11:39:53.030066] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:47.338 [2024-11-20 11:39:53.030077] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:47.338 [2024-11-20 11:39:53.030088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.338 [2024-11-20 11:39:53.030098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:47.338 [2024-11-20 11:39:53.030115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.832 ms 00:25:47.338 [2024-11-20 11:39:53.030125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.338 [2024-11-20 11:39:53.068758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.338 [2024-11-20 11:39:53.068806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:47.338 [2024-11-20 11:39:53.068822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.569 ms 00:25:47.338 [2024-11-20 11:39:53.068832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.338 [2024-11-20 11:39:53.069018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.339 [2024-11-20 11:39:53.069042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:47.339 [2024-11-20 11:39:53.069054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:25:47.339 [2024-11-20 11:39:53.069064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.598 [2024-11-20 11:39:53.126691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.598 [2024-11-20 11:39:53.126741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:47.598 [2024-11-20 11:39:53.126775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.597 ms 00:25:47.598 [2024-11-20 11:39:53.126791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.598 [2024-11-20 11:39:53.126929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.598 [2024-11-20 11:39:53.126943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:47.598 [2024-11-20 11:39:53.126954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:47.598 [2024-11-20 11:39:53.126964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.598 [2024-11-20 11:39:53.127396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.598 [2024-11-20 11:39:53.127410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:47.598 [2024-11-20 11:39:53.127422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:25:47.598 [2024-11-20 11:39:53.127438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.598 [2024-11-20 11:39:53.127572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.598 [2024-11-20 11:39:53.127587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:47.598 [2024-11-20 11:39:53.127598] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:25:47.598 [2024-11-20 11:39:53.127608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.598 [2024-11-20 11:39:53.147223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.598 [2024-11-20 11:39:53.147269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:47.598 [2024-11-20 11:39:53.147284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.589 ms 00:25:47.598 [2024-11-20 11:39:53.147295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.598 [2024-11-20 11:39:53.167038] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:25:47.598 [2024-11-20 11:39:53.167078] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:47.598 [2024-11-20 11:39:53.167110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.598 [2024-11-20 11:39:53.167122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:47.598 [2024-11-20 11:39:53.167135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.667 ms 00:25:47.598 [2024-11-20 11:39:53.167145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.598 [2024-11-20 11:39:53.197914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.598 [2024-11-20 11:39:53.197974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:47.598 [2024-11-20 11:39:53.198018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.677 ms 00:25:47.598 [2024-11-20 11:39:53.198028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.598 [2024-11-20 11:39:53.217221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.598 [2024-11-20 11:39:53.217258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:47.598 [2024-11-20 11:39:53.217272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.088 ms 00:25:47.598 [2024-11-20 11:39:53.217283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.598 [2024-11-20 11:39:53.235795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.598 [2024-11-20 11:39:53.235830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:47.598 [2024-11-20 11:39:53.235843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.428 ms 00:25:47.598 [2024-11-20 11:39:53.235853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.598 [2024-11-20 11:39:53.236701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.598 [2024-11-20 11:39:53.236726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:47.598 [2024-11-20 11:39:53.236738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.722 ms 00:25:47.598 [2024-11-20 11:39:53.236749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.598 [2024-11-20 11:39:53.327453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.598 [2024-11-20 11:39:53.327523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:47.598 [2024-11-20 11:39:53.327540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 90.673 ms 00:25:47.598 [2024-11-20 11:39:53.327552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.598 [2024-11-20 11:39:53.339927] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:47.598 [2024-11-20 11:39:53.356673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.598 [2024-11-20 11:39:53.356720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:47.598 [2024-11-20 11:39:53.356736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.995 ms 00:25:47.598 [2024-11-20 11:39:53.356747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.598 [2024-11-20 11:39:53.356894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.858 [2024-11-20 11:39:53.356912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:47.858 [2024-11-20 11:39:53.356924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:47.858 [2024-11-20 11:39:53.356934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.858 [2024-11-20 11:39:53.356990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.858 [2024-11-20 11:39:53.357001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:47.858 [2024-11-20 11:39:53.357013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:25:47.858 [2024-11-20 11:39:53.357023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.858 [2024-11-20 11:39:53.357050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.858 [2024-11-20 11:39:53.357061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:47.858 [2024-11-20 11:39:53.357074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:47.858 [2024-11-20 11:39:53.357085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.858 [2024-11-20 11:39:53.357121] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:47.858 [2024-11-20 11:39:53.357141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.858 [2024-11-20 11:39:53.357151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:47.858 [2024-11-20 11:39:53.357162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:25:47.858 [2024-11-20 11:39:53.357172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.858 [2024-11-20 11:39:53.397708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.858 [2024-11-20 11:39:53.397771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:47.858 [2024-11-20 11:39:53.397805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.509 ms 00:25:47.858 [2024-11-20 11:39:53.397818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.858 [2024-11-20 11:39:53.397982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.858 [2024-11-20 11:39:53.398000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:47.858 [2024-11-20 11:39:53.398014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:25:47.858 [2024-11-20 11:39:53.398026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
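The hex superblock entries and the MiB region dump above describe the same geometry in different units: blk_offs/blk_sz count 4 KiB FTL blocks. Taking the entry that lines up with the l2p region (type:0x2, blk_offs:0x20, blk_sz:0x5a00) as an example, the conversion reproduces the "offset: 0.12 MiB / blocks: 90.00 MiB" figures, and the same 90 MiB also falls out of the 23592960 L2P entries at the reported 4-byte address size. A quick check with values copied from the dumps (a sanity check on the log, not part of the test itself):

  echo $(( 0x20 * 4096 ))              # 131072 bytes = 0.125 MiB   -> "offset: 0.12 MiB"
  echo $(( 0x5a00 * 4096 / 1048576 ))  # 90 MiB                     -> "blocks: 90.00 MiB"
  echo $(( 23592960 * 4 / 1048576 ))   # 90 MiB of L2P table (entries x address size)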
00:25:47.858 [2024-11-20 11:39:53.399129] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:47.858 [2024-11-20 11:39:53.404569] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 407.899 ms, result 0 00:25:47.858 [2024-11-20 11:39:53.405313] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:47.858 [2024-11-20 11:39:53.425788] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:48.804  [2024-11-20T11:39:55.504Z] Copying: 29/256 [MB] (29 MBps) [2024-11-20T11:39:56.440Z] Copying: 60/256 [MB] (31 MBps) [2024-11-20T11:39:57.817Z] Copying: 87/256 [MB] (26 MBps) [2024-11-20T11:39:58.754Z] Copying: 114/256 [MB] (26 MBps) [2024-11-20T11:39:59.690Z] Copying: 145/256 [MB] (30 MBps) [2024-11-20T11:40:00.627Z] Copying: 175/256 [MB] (30 MBps) [2024-11-20T11:40:01.562Z] Copying: 205/256 [MB] (30 MBps) [2024-11-20T11:40:02.129Z] Copying: 237/256 [MB] (31 MBps) [2024-11-20T11:40:02.129Z] Copying: 256/256 [MB] (average 29 MBps)[2024-11-20 11:40:02.011131] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:56.367 [2024-11-20 11:40:02.026672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.367 [2024-11-20 11:40:02.026714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:56.367 [2024-11-20 11:40:02.026730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:56.367 [2024-11-20 11:40:02.026741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.367 [2024-11-20 11:40:02.026766] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:56.367 [2024-11-20 11:40:02.031033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.367 [2024-11-20 11:40:02.031066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:56.367 [2024-11-20 11:40:02.031078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.250 ms 00:25:56.367 [2024-11-20 11:40:02.031089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.367 [2024-11-20 11:40:02.033030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.367 [2024-11-20 11:40:02.033066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:56.367 [2024-11-20 11:40:02.033080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.913 ms 00:25:56.367 [2024-11-20 11:40:02.033092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.367 [2024-11-20 11:40:02.039653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.367 [2024-11-20 11:40:02.039685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:56.367 [2024-11-20 11:40:02.039705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.539 ms 00:25:56.367 [2024-11-20 11:40:02.039716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.367 [2024-11-20 11:40:02.045986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.367 [2024-11-20 11:40:02.046016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:56.367 [2024-11-20 11:40:02.046028] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.216 ms 00:25:56.367 [2024-11-20 11:40:02.046039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.367 [2024-11-20 11:40:02.084836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.367 [2024-11-20 11:40:02.084878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:56.367 [2024-11-20 11:40:02.084893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.748 ms 00:25:56.368 [2024-11-20 11:40:02.084904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.368 [2024-11-20 11:40:02.107401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.368 [2024-11-20 11:40:02.107443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:56.368 [2024-11-20 11:40:02.107465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.434 ms 00:25:56.368 [2024-11-20 11:40:02.107488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.368 [2024-11-20 11:40:02.107633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.368 [2024-11-20 11:40:02.107647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:56.368 [2024-11-20 11:40:02.107659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:25:56.368 [2024-11-20 11:40:02.107669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.627 [2024-11-20 11:40:02.147188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.627 [2024-11-20 11:40:02.147248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:56.627 [2024-11-20 11:40:02.147264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.496 ms 00:25:56.627 [2024-11-20 11:40:02.147274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.627 [2024-11-20 11:40:02.186400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.627 [2024-11-20 11:40:02.186445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:56.627 [2024-11-20 11:40:02.186460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.057 ms 00:25:56.627 [2024-11-20 11:40:02.186481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.627 [2024-11-20 11:40:02.223484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.627 [2024-11-20 11:40:02.223525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:56.627 [2024-11-20 11:40:02.223541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.935 ms 00:25:56.627 [2024-11-20 11:40:02.223551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.627 [2024-11-20 11:40:02.262392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.627 [2024-11-20 11:40:02.262440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:56.627 [2024-11-20 11:40:02.262472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.747 ms 00:25:56.627 [2024-11-20 11:40:02.262483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.627 [2024-11-20 11:40:02.262561] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:56.627 [2024-11-20 11:40:02.262589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Bands 1-99: 0 / 261120 wr_cnt: 0 state: free (every band in this range reports the same values) 00:25:56.629 [2024-11-20 11:40:02.263770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120
wr_cnt: 0 state: free 00:25:56.629 [2024-11-20 11:40:02.263790] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:56.629 [2024-11-20 11:40:02.263801] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 963441e8-3606-45d4-aa8c-a0a9c7a666bb 00:25:56.629 [2024-11-20 11:40:02.263813] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:56.629 [2024-11-20 11:40:02.263824] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:56.629 [2024-11-20 11:40:02.263835] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:56.629 [2024-11-20 11:40:02.263847] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:56.629 [2024-11-20 11:40:02.263858] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:56.629 [2024-11-20 11:40:02.263869] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:56.629 [2024-11-20 11:40:02.263880] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:56.629 [2024-11-20 11:40:02.263890] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:56.629 [2024-11-20 11:40:02.263900] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:56.629 [2024-11-20 11:40:02.263911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.629 [2024-11-20 11:40:02.263922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:56.629 [2024-11-20 11:40:02.263938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.351 ms 00:25:56.629 [2024-11-20 11:40:02.263949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.629 [2024-11-20 11:40:02.287224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.629 [2024-11-20 11:40:02.287281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:56.629 [2024-11-20 11:40:02.287297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.249 ms 00:25:56.629 [2024-11-20 11:40:02.287309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.629 [2024-11-20 11:40:02.287968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.629 [2024-11-20 11:40:02.287994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:56.629 [2024-11-20 11:40:02.288006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.583 ms 00:25:56.629 [2024-11-20 11:40:02.288017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.629 [2024-11-20 11:40:02.347454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.629 [2024-11-20 11:40:02.347531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:56.629 [2024-11-20 11:40:02.347547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.629 [2024-11-20 11:40:02.347558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.629 [2024-11-20 11:40:02.347657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.629 [2024-11-20 11:40:02.347676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:56.629 [2024-11-20 11:40:02.347687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.629 [2024-11-20 11:40:02.347697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
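The FTL management layer logs every startup and shutdown step as a fixed group of trace_step records from mngt/ftl_mngt.c (Action or Rollback, then name, duration, status), which makes per-step timings easy to pull out of a console log like this one. A minimal bash/awk sketch, assuming the log is saved to ftl_trim.log (a hypothetical filename) with one record per line as shown above:

    #!/usr/bin/env bash
    # List FTL management steps sorted by duration, slowest first.
    # Pairs each 428:trace_step (name) record with the following
    # 430:trace_step (duration) record.
    awk '
      /428:trace_step/ { sub(/.*name: /, ""); name = $0 }
      /430:trace_step/ { sub(/.*duration: /, ""); sub(/ ms.*/, "");
                         printf "%10.3f ms  %s\n", $0, name }
    ' ftl_trim.log | sort -rn | head

Run against the shutdown sequence above, this prints the slowest steps first; for example, Deinitialize L2P dominates at 23.249 ms while the rollback steps report 0.000 ms.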
00:25:56.629 [2024-11-20 11:40:02.347762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.629 [2024-11-20 11:40:02.347775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:56.629 [2024-11-20 11:40:02.347786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.629 [2024-11-20 11:40:02.347796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.629 [2024-11-20 11:40:02.347816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.629 [2024-11-20 11:40:02.347826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:56.629 [2024-11-20 11:40:02.347841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.629 [2024-11-20 11:40:02.347851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.889 [2024-11-20 11:40:02.480745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.889 [2024-11-20 11:40:02.480805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:56.889 [2024-11-20 11:40:02.480822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.889 [2024-11-20 11:40:02.480833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.889 [2024-11-20 11:40:02.594082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.889 [2024-11-20 11:40:02.594152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:56.889 [2024-11-20 11:40:02.594177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.889 [2024-11-20 11:40:02.594189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.889 [2024-11-20 11:40:02.594293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.889 [2024-11-20 11:40:02.594308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:56.889 [2024-11-20 11:40:02.594321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.889 [2024-11-20 11:40:02.594332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.889 [2024-11-20 11:40:02.594364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.889 [2024-11-20 11:40:02.594376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:56.889 [2024-11-20 11:40:02.594387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.889 [2024-11-20 11:40:02.594403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.889 [2024-11-20 11:40:02.594532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.889 [2024-11-20 11:40:02.594547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:56.889 [2024-11-20 11:40:02.594560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.889 [2024-11-20 11:40:02.594570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.889 [2024-11-20 11:40:02.594613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.889 [2024-11-20 11:40:02.594628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:56.889 [2024-11-20 11:40:02.594639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.889 [2024-11-20 
11:40:02.594650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.889 [2024-11-20 11:40:02.594696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.889 [2024-11-20 11:40:02.594709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:56.889 [2024-11-20 11:40:02.594720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.889 [2024-11-20 11:40:02.594743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.889 [2024-11-20 11:40:02.594792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:56.889 [2024-11-20 11:40:02.594805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:56.889 [2024-11-20 11:40:02.594816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:56.889 [2024-11-20 11:40:02.594831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.889 [2024-11-20 11:40:02.594979] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 568.299 ms, result 0 00:25:58.265 00:25:58.265 00:25:58.265 11:40:04 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:25:58.265 11:40:04 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=79277 00:25:58.265 11:40:04 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 79277 00:25:58.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:58.266 11:40:04 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 79277 ']' 00:25:58.266 11:40:04 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:58.266 11:40:04 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:58.266 11:40:04 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:58.266 11:40:04 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:58.266 11:40:04 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:25:58.524 [2024-11-20 11:40:04.147218] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
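At this point trim.sh has relaunched spdk_tgt (pid 79277) and blocks in waitforlisten until the new process is up and its RPC socket at /var/tmp/spdk.sock accepts requests. A simplified sketch of that wait pattern, not the actual autotest_common.sh implementation; the helper name wait_for_rpc and the retry count are illustrative:

    # Usage: wait_for_rpc <pid> [rpc_socket]
    wait_for_rpc() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            # Give up early if the target process already died.
            kill -0 "$pid" 2>/dev/null || return 1
            # Done once the UNIX domain socket exists and answers an RPC.
            if [[ -S $sock ]] && scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1 # timed out waiting for the target
    }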
00:25:58.524 [2024-11-20 11:40:04.147406] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79277 ] 00:25:58.781 [2024-11-20 11:40:04.346575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:59.038 [2024-11-20 11:40:04.552922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:59.971 11:40:05 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:59.971 11:40:05 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:25:59.971 11:40:05 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:26:00.232 [2024-11-20 11:40:05.760212] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:00.232 [2024-11-20 11:40:05.760278] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:00.232 [2024-11-20 11:40:05.919110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.232 [2024-11-20 11:40:05.919165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:00.232 [2024-11-20 11:40:05.919183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:00.232 [2024-11-20 11:40:05.919194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.232 [2024-11-20 11:40:05.922661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.232 [2024-11-20 11:40:05.922699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:00.232 [2024-11-20 11:40:05.922731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.444 ms 00:26:00.232 [2024-11-20 11:40:05.922741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.232 [2024-11-20 11:40:05.922858] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:00.232 [2024-11-20 11:40:05.923912] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:00.232 [2024-11-20 11:40:05.923942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.232 [2024-11-20 11:40:05.923953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:00.232 [2024-11-20 11:40:05.923966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.095 ms 00:26:00.232 [2024-11-20 11:40:05.923976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.232 [2024-11-20 11:40:05.925431] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:00.232 [2024-11-20 11:40:05.945032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.232 [2024-11-20 11:40:05.945075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:00.232 [2024-11-20 11:40:05.945107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.605 ms 00:26:00.232 [2024-11-20 11:40:05.945121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.232 [2024-11-20 11:40:05.945235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.232 [2024-11-20 11:40:05.945253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:00.232 [2024-11-20 11:40:05.945266] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:26:00.232 [2024-11-20 11:40:05.945291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.232 [2024-11-20 11:40:05.952289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.232 [2024-11-20 11:40:05.952338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:00.232 [2024-11-20 11:40:05.952360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.941 ms 00:26:00.232 [2024-11-20 11:40:05.952382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.232 [2024-11-20 11:40:05.952566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.232 [2024-11-20 11:40:05.952613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:00.232 [2024-11-20 11:40:05.952633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:26:00.232 [2024-11-20 11:40:05.952656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.232 [2024-11-20 11:40:05.952717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.232 [2024-11-20 11:40:05.952744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:00.232 [2024-11-20 11:40:05.952765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:26:00.232 [2024-11-20 11:40:05.952786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.232 [2024-11-20 11:40:05.952833] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:26:00.232 [2024-11-20 11:40:05.958186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.232 [2024-11-20 11:40:05.958223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:00.232 [2024-11-20 11:40:05.958252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.360 ms 00:26:00.232 [2024-11-20 11:40:05.958263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.232 [2024-11-20 11:40:05.958344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.232 [2024-11-20 11:40:05.958357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:00.232 [2024-11-20 11:40:05.958370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:00.232 [2024-11-20 11:40:05.958383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.232 [2024-11-20 11:40:05.958408] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:00.232 [2024-11-20 11:40:05.958430] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:00.232 [2024-11-20 11:40:05.958477] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:00.232 [2024-11-20 11:40:05.958518] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:00.232 [2024-11-20 11:40:05.958622] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:00.232 [2024-11-20 11:40:05.958642] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:00.232 [2024-11-20 11:40:05.958661] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:00.232 [2024-11-20 11:40:05.958677] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:00.232 [2024-11-20 11:40:05.958692] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:00.232 [2024-11-20 11:40:05.958704] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:26:00.232 [2024-11-20 11:40:05.958717] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:00.232 [2024-11-20 11:40:05.958727] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:00.232 [2024-11-20 11:40:05.958742] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:00.232 [2024-11-20 11:40:05.958753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.232 [2024-11-20 11:40:05.958775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:00.232 [2024-11-20 11:40:05.958786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.350 ms 00:26:00.232 [2024-11-20 11:40:05.958809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.232 [2024-11-20 11:40:05.958895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.232 [2024-11-20 11:40:05.958917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:00.232 [2024-11-20 11:40:05.958928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:26:00.232 [2024-11-20 11:40:05.958943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.232 [2024-11-20 11:40:05.959034] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:00.232 [2024-11-20 11:40:05.959056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:00.232 [2024-11-20 11:40:05.959067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:00.232 [2024-11-20 11:40:05.959082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:00.232 [2024-11-20 11:40:05.959094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:00.232 [2024-11-20 11:40:05.959109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:00.232 [2024-11-20 11:40:05.959119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:26:00.232 [2024-11-20 11:40:05.959138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:00.232 [2024-11-20 11:40:05.959148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:26:00.232 [2024-11-20 11:40:05.959162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:00.232 [2024-11-20 11:40:05.959172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:00.232 [2024-11-20 11:40:05.959186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:26:00.232 [2024-11-20 11:40:05.959195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:00.232 [2024-11-20 11:40:05.959210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:00.232 [2024-11-20 11:40:05.959220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:26:00.232 [2024-11-20 11:40:05.959234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:00.232 
[2024-11-20 11:40:05.959244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:00.232 [2024-11-20 11:40:05.959258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:26:00.232 [2024-11-20 11:40:05.959268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:00.232 [2024-11-20 11:40:05.959282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:00.232 [2024-11-20 11:40:05.959301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:26:00.232 [2024-11-20 11:40:05.959314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:00.232 [2024-11-20 11:40:05.959324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:00.232 [2024-11-20 11:40:05.959338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:26:00.232 [2024-11-20 11:40:05.959347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:00.232 [2024-11-20 11:40:05.959358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:00.232 [2024-11-20 11:40:05.959368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:26:00.232 [2024-11-20 11:40:05.959380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:00.233 [2024-11-20 11:40:05.959390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:00.233 [2024-11-20 11:40:05.959401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:26:00.233 [2024-11-20 11:40:05.959411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:00.233 [2024-11-20 11:40:05.959423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:00.233 [2024-11-20 11:40:05.959432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:26:00.233 [2024-11-20 11:40:05.959446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:00.233 [2024-11-20 11:40:05.959455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:00.233 [2024-11-20 11:40:05.959467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:26:00.233 [2024-11-20 11:40:05.959496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:00.233 [2024-11-20 11:40:05.959518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:00.233 [2024-11-20 11:40:05.959534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:26:00.233 [2024-11-20 11:40:05.959558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:00.233 [2024-11-20 11:40:05.959573] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:00.233 [2024-11-20 11:40:05.959593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:26:00.233 [2024-11-20 11:40:05.959609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:00.233 [2024-11-20 11:40:05.959625] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:00.233 [2024-11-20 11:40:05.959636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:00.233 [2024-11-20 11:40:05.959656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:00.233 [2024-11-20 11:40:05.959667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:00.233 [2024-11-20 11:40:05.959682] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:26:00.233 [2024-11-20 11:40:05.959692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:00.233 [2024-11-20 11:40:05.959706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:00.233 [2024-11-20 11:40:05.959716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:00.233 [2024-11-20 11:40:05.959730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:00.233 [2024-11-20 11:40:05.959740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:00.233 [2024-11-20 11:40:05.959755] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:00.233 [2024-11-20 11:40:05.959769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:00.233 [2024-11-20 11:40:05.959789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:26:00.233 [2024-11-20 11:40:05.959801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:26:00.233 [2024-11-20 11:40:05.959818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:26:00.233 [2024-11-20 11:40:05.959829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:26:00.233 [2024-11-20 11:40:05.959845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:26:00.233 [2024-11-20 11:40:05.959855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:26:00.233 [2024-11-20 11:40:05.959870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:26:00.233 [2024-11-20 11:40:05.959881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:26:00.233 [2024-11-20 11:40:05.959896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:26:00.233 [2024-11-20 11:40:05.959906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:26:00.233 [2024-11-20 11:40:05.959921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:26:00.233 [2024-11-20 11:40:05.959932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:26:00.233 [2024-11-20 11:40:05.959947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:26:00.233 [2024-11-20 11:40:05.959958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:26:00.233 [2024-11-20 11:40:05.959973] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:00.233 [2024-11-20 
11:40:05.959985] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:00.233 [2024-11-20 11:40:05.960005] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:00.233 [2024-11-20 11:40:05.960016] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:00.233 [2024-11-20 11:40:05.960033] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:00.233 [2024-11-20 11:40:05.960044] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:00.233 [2024-11-20 11:40:05.960060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.233 [2024-11-20 11:40:05.960071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:00.233 [2024-11-20 11:40:05.960087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.077 ms 00:26:00.233 [2024-11-20 11:40:05.960097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.493 [2024-11-20 11:40:06.003484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.493 [2024-11-20 11:40:06.003538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:00.493 [2024-11-20 11:40:06.003562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.296 ms 00:26:00.493 [2024-11-20 11:40:06.003573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.493 [2024-11-20 11:40:06.003749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.493 [2024-11-20 11:40:06.003763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:00.493 [2024-11-20 11:40:06.003780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:26:00.493 [2024-11-20 11:40:06.003790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.493 [2024-11-20 11:40:06.053216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.493 [2024-11-20 11:40:06.053269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:00.493 [2024-11-20 11:40:06.053299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.390 ms 00:26:00.493 [2024-11-20 11:40:06.053311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.493 [2024-11-20 11:40:06.053458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.493 [2024-11-20 11:40:06.053487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:00.493 [2024-11-20 11:40:06.053505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:00.493 [2024-11-20 11:40:06.053516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.493 [2024-11-20 11:40:06.053978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.493 [2024-11-20 11:40:06.053999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:00.493 [2024-11-20 11:40:06.054021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.431 ms 00:26:00.493 [2024-11-20 11:40:06.054032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:26:00.493 [2024-11-20 11:40:06.054179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.493 [2024-11-20 11:40:06.054199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:00.493 [2024-11-20 11:40:06.054217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:26:00.493 [2024-11-20 11:40:06.054228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.493 [2024-11-20 11:40:06.077048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.493 [2024-11-20 11:40:06.077096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:00.493 [2024-11-20 11:40:06.077116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.774 ms 00:26:00.493 [2024-11-20 11:40:06.077127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.493 [2024-11-20 11:40:06.097222] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:26:00.493 [2024-11-20 11:40:06.097283] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:00.493 [2024-11-20 11:40:06.097306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.493 [2024-11-20 11:40:06.097317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:00.493 [2024-11-20 11:40:06.097334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.019 ms 00:26:00.493 [2024-11-20 11:40:06.097345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.493 [2024-11-20 11:40:06.127357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.493 [2024-11-20 11:40:06.127403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:00.493 [2024-11-20 11:40:06.127440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.916 ms 00:26:00.493 [2024-11-20 11:40:06.127452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.493 [2024-11-20 11:40:06.146267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.493 [2024-11-20 11:40:06.146309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:00.493 [2024-11-20 11:40:06.146335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.723 ms 00:26:00.493 [2024-11-20 11:40:06.146345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.493 [2024-11-20 11:40:06.165026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.493 [2024-11-20 11:40:06.165066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:00.493 [2024-11-20 11:40:06.165086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.589 ms 00:26:00.493 [2024-11-20 11:40:06.165096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.493 [2024-11-20 11:40:06.165986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.493 [2024-11-20 11:40:06.166014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:00.493 [2024-11-20 11:40:06.166032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.734 ms 00:26:00.493 [2024-11-20 11:40:06.166043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.753 [2024-11-20 
11:40:06.262795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.753 [2024-11-20 11:40:06.262856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:00.753 [2024-11-20 11:40:06.262894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.713 ms 00:26:00.753 [2024-11-20 11:40:06.262906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.753 [2024-11-20 11:40:06.274603] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:26:00.753 [2024-11-20 11:40:06.291418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.753 [2024-11-20 11:40:06.291499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:00.753 [2024-11-20 11:40:06.291523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.390 ms 00:26:00.753 [2024-11-20 11:40:06.291540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.753 [2024-11-20 11:40:06.291687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.753 [2024-11-20 11:40:06.291706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:00.753 [2024-11-20 11:40:06.291718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:00.753 [2024-11-20 11:40:06.291733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.753 [2024-11-20 11:40:06.291794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.753 [2024-11-20 11:40:06.291811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:00.753 [2024-11-20 11:40:06.291822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:26:00.753 [2024-11-20 11:40:06.291837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.753 [2024-11-20 11:40:06.291867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.753 [2024-11-20 11:40:06.291885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:00.753 [2024-11-20 11:40:06.291896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:00.753 [2024-11-20 11:40:06.291914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.753 [2024-11-20 11:40:06.291955] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:00.753 [2024-11-20 11:40:06.291977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.753 [2024-11-20 11:40:06.291988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:00.753 [2024-11-20 11:40:06.292009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:00.753 [2024-11-20 11:40:06.292019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.753 [2024-11-20 11:40:06.330034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.753 [2024-11-20 11:40:06.330084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:00.753 [2024-11-20 11:40:06.330106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.973 ms 00:26:00.753 [2024-11-20 11:40:06.330117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.753 [2024-11-20 11:40:06.330253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.753 [2024-11-20 11:40:06.330267] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:00.753 [2024-11-20 11:40:06.330283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:26:00.753 [2024-11-20 11:40:06.330299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.753 [2024-11-20 11:40:06.331299] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:00.753 [2024-11-20 11:40:06.335983] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 411.816 ms, result 0 00:26:00.753 [2024-11-20 11:40:06.337459] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:00.753 Some configs were skipped because the RPC state that can call them passed over. 00:26:00.753 11:40:06 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:26:01.011 [2024-11-20 11:40:06.566162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.011 [2024-11-20 11:40:06.566231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:26:01.011 [2024-11-20 11:40:06.566249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.607 ms 00:26:01.011 [2024-11-20 11:40:06.566264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.012 [2024-11-20 11:40:06.566302] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.756 ms, result 0 00:26:01.012 true 00:26:01.012 11:40:06 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:26:01.269 [2024-11-20 11:40:06.841956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.269 [2024-11-20 11:40:06.842017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:26:01.269 [2024-11-20 11:40:06.842055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.059 ms 00:26:01.269 [2024-11-20 11:40:06.842068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.269 [2024-11-20 11:40:06.842117] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.230 ms, result 0 00:26:01.269 true 00:26:01.269 11:40:06 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 79277 00:26:01.269 11:40:06 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 79277 ']' 00:26:01.269 11:40:06 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79277 00:26:01.269 11:40:06 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:26:01.269 11:40:06 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:01.269 11:40:06 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79277 00:26:01.269 11:40:06 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:01.269 11:40:06 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:01.269 11:40:06 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79277' 00:26:01.269 killing process with pid 79277 00:26:01.269 11:40:06 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 79277 00:26:01.269 11:40:06 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 79277 00:26:02.644 [2024-11-20 11:40:08.093979] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.644 [2024-11-20 11:40:08.094033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:02.644 [2024-11-20 11:40:08.094049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:02.644 [2024-11-20 11:40:08.094062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.644 [2024-11-20 11:40:08.094086] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:26:02.644 [2024-11-20 11:40:08.098432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.644 [2024-11-20 11:40:08.098480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:02.644 [2024-11-20 11:40:08.098500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.325 ms 00:26:02.644 [2024-11-20 11:40:08.098510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.644 [2024-11-20 11:40:08.098782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.644 [2024-11-20 11:40:08.098802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:02.644 [2024-11-20 11:40:08.098817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 00:26:02.644 [2024-11-20 11:40:08.098827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.644 [2024-11-20 11:40:08.102102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.644 [2024-11-20 11:40:08.102139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:02.644 [2024-11-20 11:40:08.102157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.250 ms 00:26:02.644 [2024-11-20 11:40:08.102168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.644 [2024-11-20 11:40:08.108435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.644 [2024-11-20 11:40:08.108485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:02.644 [2024-11-20 11:40:08.108510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.226 ms 00:26:02.644 [2024-11-20 11:40:08.108521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.644 [2024-11-20 11:40:08.123990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.644 [2024-11-20 11:40:08.124029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:02.644 [2024-11-20 11:40:08.124049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.386 ms 00:26:02.644 [2024-11-20 11:40:08.124070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.644 [2024-11-20 11:40:08.134467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.644 [2024-11-20 11:40:08.134514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:02.644 [2024-11-20 11:40:08.134535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.339 ms 00:26:02.644 [2024-11-20 11:40:08.134546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.644 [2024-11-20 11:40:08.134666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.644 [2024-11-20 11:40:08.134679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:02.644 [2024-11-20 11:40:08.134692] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms
00:26:02.644 [2024-11-20 11:40:08.134702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:02.644 [2024-11-20 11:40:08.150939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:02.644 [2024-11-20 11:40:08.150977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:26:02.644 [2024-11-20 11:40:08.150993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.210 ms
00:26:02.644 [2024-11-20 11:40:08.151004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:02.644 [2024-11-20 11:40:08.167277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:02.644 [2024-11-20 11:40:08.167317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:26:02.644 [2024-11-20 11:40:08.167345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.208 ms
00:26:02.644 [2024-11-20 11:40:08.167356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:02.644 [2024-11-20 11:40:08.183942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:02.644 [2024-11-20 11:40:08.183990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:26:02.644 [2024-11-20 11:40:08.184015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.519 ms
00:26:02.644 [2024-11-20 11:40:08.184027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:02.644 [2024-11-20 11:40:08.199827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:02.644 [2024-11-20 11:40:08.199866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:26:02.644 [2024-11-20 11:40:08.199886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.703 ms
00:26:02.644 [2024-11-20 11:40:08.199896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:02.644 [2024-11-20 11:40:08.199953] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:26:02.644 [2024-11-20 11:40:08.199971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1-100: 0 / 261120 wr_cnt: 0 state: free
00:26:02.645 [2024-11-20 11:40:08.201420] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:26:02.645 [2024-11-20 11:40:08.201448] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 963441e8-3606-45d4-aa8c-a0a9c7a666bb
00:26:02.645 [2024-11-20 11:40:08.201485] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:26:02.645 [2024-11-20 11:40:08.201509] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:26:02.645 [2024-11-20 11:40:08.201520] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:26:02.645 [2024-11-20 11:40:08.201536] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:26:02.645 [2024-11-20 11:40:08.201547] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:26:02.645 [2024-11-20 11:40:08.201563] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:26:02.645 [2024-11-20 11:40:08.201574] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:26:02.645 [2024-11-20 11:40:08.201589] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:26:02.645 [2024-11-20 11:40:08.201600] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:26:02.645 [2024-11-20 11:40:08.201616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:02.645 [2024-11-20 11:40:08.201628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:02.645 [2024-11-20 11:40:08.201645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.666 ms 00:26:02.645 [2024-11-20 11:40:08.201656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.645 [2024-11-20 11:40:08.223485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.645 [2024-11-20 11:40:08.223522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:02.645 [2024-11-20 11:40:08.223547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.772 ms 00:26:02.645 [2024-11-20 11:40:08.223558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.645 [2024-11-20 11:40:08.224146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.645 [2024-11-20 11:40:08.224164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:02.645 [2024-11-20 11:40:08.224181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.517 ms 00:26:02.645 [2024-11-20 11:40:08.224197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.645 [2024-11-20 11:40:08.298179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:02.645 [2024-11-20 11:40:08.298235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:02.645 [2024-11-20 11:40:08.298258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:02.645 [2024-11-20 11:40:08.298270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.645 [2024-11-20 11:40:08.298440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:02.645 [2024-11-20 11:40:08.298461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:02.645 [2024-11-20 11:40:08.298499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:02.645 [2024-11-20 11:40:08.298519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.645 [2024-11-20 11:40:08.298589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:02.645 [2024-11-20 11:40:08.298605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:02.645 [2024-11-20 11:40:08.298627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:02.645 [2024-11-20 11:40:08.298640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.645 [2024-11-20 11:40:08.298669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:02.645 [2024-11-20 11:40:08.298681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:02.645 [2024-11-20 11:40:08.298698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:02.645 [2024-11-20 11:40:08.298710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.902 [2024-11-20 11:40:08.432988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:02.902 [2024-11-20 11:40:08.433055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:02.902 [2024-11-20 11:40:08.433074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:02.902 [2024-11-20 11:40:08.433084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.902 [2024-11-20 
11:40:08.541264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:02.902 [2024-11-20 11:40:08.541329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:02.902 [2024-11-20 11:40:08.541351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:02.902 [2024-11-20 11:40:08.541368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.902 [2024-11-20 11:40:08.541507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:02.902 [2024-11-20 11:40:08.541521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:02.902 [2024-11-20 11:40:08.541542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:02.902 [2024-11-20 11:40:08.541553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.902 [2024-11-20 11:40:08.541596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:02.902 [2024-11-20 11:40:08.541607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:02.902 [2024-11-20 11:40:08.541622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:02.902 [2024-11-20 11:40:08.541633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.902 [2024-11-20 11:40:08.541777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:02.902 [2024-11-20 11:40:08.541793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:02.902 [2024-11-20 11:40:08.541810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:02.902 [2024-11-20 11:40:08.541821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.902 [2024-11-20 11:40:08.541870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:02.902 [2024-11-20 11:40:08.541884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:02.902 [2024-11-20 11:40:08.541900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:02.902 [2024-11-20 11:40:08.541912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.902 [2024-11-20 11:40:08.541957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:02.902 [2024-11-20 11:40:08.541975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:02.902 [2024-11-20 11:40:08.541997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:02.902 [2024-11-20 11:40:08.542008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.902 [2024-11-20 11:40:08.542062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:02.902 [2024-11-20 11:40:08.542075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:02.903 [2024-11-20 11:40:08.542091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:02.903 [2024-11-20 11:40:08.542102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.903 [2024-11-20 11:40:08.542273] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 448.260 ms, result 0 00:26:03.839 11:40:09 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:26:03.839 11:40:09 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:04.098 [2024-11-20 11:40:09.691106] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:26:04.098 [2024-11-20 11:40:09.691288] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79346 ] 00:26:04.357 [2024-11-20 11:40:09.886247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:04.357 [2024-11-20 11:40:10.009811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:04.926 [2024-11-20 11:40:10.377438] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:04.926 [2024-11-20 11:40:10.377522] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:04.926 [2024-11-20 11:40:10.540746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.926 [2024-11-20 11:40:10.540798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:04.926 [2024-11-20 11:40:10.540814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:04.926 [2024-11-20 11:40:10.540825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.926 [2024-11-20 11:40:10.544096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.926 [2024-11-20 11:40:10.544136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:04.926 [2024-11-20 11:40:10.544150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.250 ms 00:26:04.926 [2024-11-20 11:40:10.544160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.926 [2024-11-20 11:40:10.544282] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:04.926 [2024-11-20 11:40:10.545222] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:04.926 [2024-11-20 11:40:10.545252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.926 [2024-11-20 11:40:10.545264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:04.926 [2024-11-20 11:40:10.545276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.978 ms 00:26:04.926 [2024-11-20 11:40:10.545286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.926 [2024-11-20 11:40:10.546868] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:04.926 [2024-11-20 11:40:10.566381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.926 [2024-11-20 11:40:10.566432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:04.926 [2024-11-20 11:40:10.566449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.512 ms 00:26:04.926 [2024-11-20 11:40:10.566463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.926 [2024-11-20 11:40:10.566644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.926 [2024-11-20 11:40:10.566667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:04.926 [2024-11-20 11:40:10.566680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.028 ms 00:26:04.926 [2024-11-20 11:40:10.566690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.926 [2024-11-20 11:40:10.574063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.926 [2024-11-20 11:40:10.574103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:04.926 [2024-11-20 11:40:10.574117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.323 ms 00:26:04.926 [2024-11-20 11:40:10.574128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.926 [2024-11-20 11:40:10.574261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.926 [2024-11-20 11:40:10.574277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:04.926 [2024-11-20 11:40:10.574290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:26:04.926 [2024-11-20 11:40:10.574302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.926 [2024-11-20 11:40:10.574336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.926 [2024-11-20 11:40:10.574352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:04.926 [2024-11-20 11:40:10.574364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:04.926 [2024-11-20 11:40:10.574375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.926 [2024-11-20 11:40:10.574403] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:26:04.926 [2024-11-20 11:40:10.579546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.926 [2024-11-20 11:40:10.579579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:04.926 [2024-11-20 11:40:10.579593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.150 ms 00:26:04.926 [2024-11-20 11:40:10.579604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.926 [2024-11-20 11:40:10.579679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.926 [2024-11-20 11:40:10.579692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:04.926 [2024-11-20 11:40:10.579704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:04.926 [2024-11-20 11:40:10.579714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.926 [2024-11-20 11:40:10.579737] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:04.926 [2024-11-20 11:40:10.579763] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:04.926 [2024-11-20 11:40:10.579800] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:04.926 [2024-11-20 11:40:10.579824] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:04.926 [2024-11-20 11:40:10.579916] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:04.926 [2024-11-20 11:40:10.579933] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:04.926 [2024-11-20 11:40:10.579947] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:04.926 [2024-11-20 11:40:10.579960] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:04.926 [2024-11-20 11:40:10.579976] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:04.927 [2024-11-20 11:40:10.580004] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:26:04.927 [2024-11-20 11:40:10.580015] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:04.927 [2024-11-20 11:40:10.580025] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:04.927 [2024-11-20 11:40:10.580036] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:04.927 [2024-11-20 11:40:10.580047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.927 [2024-11-20 11:40:10.580058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:04.927 [2024-11-20 11:40:10.580069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:26:04.927 [2024-11-20 11:40:10.580080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.927 [2024-11-20 11:40:10.580165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.927 [2024-11-20 11:40:10.580177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:04.927 [2024-11-20 11:40:10.580193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:26:04.927 [2024-11-20 11:40:10.580203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.927 [2024-11-20 11:40:10.580305] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:04.927 [2024-11-20 11:40:10.580323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:04.927 [2024-11-20 11:40:10.580335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:04.927 [2024-11-20 11:40:10.580347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:04.927 [2024-11-20 11:40:10.580360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:04.927 [2024-11-20 11:40:10.580371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:04.927 [2024-11-20 11:40:10.580381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:26:04.927 [2024-11-20 11:40:10.580391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:04.927 [2024-11-20 11:40:10.580402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:26:04.927 [2024-11-20 11:40:10.580412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:04.927 [2024-11-20 11:40:10.580422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:04.927 [2024-11-20 11:40:10.580432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:26:04.927 [2024-11-20 11:40:10.580442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:04.927 [2024-11-20 11:40:10.580463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:04.927 [2024-11-20 11:40:10.580496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:26:04.927 [2024-11-20 11:40:10.580508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:04.927 [2024-11-20 11:40:10.580518] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:04.927 [2024-11-20 11:40:10.580528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:26:04.927 [2024-11-20 11:40:10.580538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:04.927 [2024-11-20 11:40:10.580549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:04.927 [2024-11-20 11:40:10.580559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:26:04.927 [2024-11-20 11:40:10.580569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:04.927 [2024-11-20 11:40:10.580579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:04.927 [2024-11-20 11:40:10.580589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:26:04.927 [2024-11-20 11:40:10.580599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:04.927 [2024-11-20 11:40:10.580609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:04.927 [2024-11-20 11:40:10.580620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:26:04.927 [2024-11-20 11:40:10.580630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:04.927 [2024-11-20 11:40:10.580640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:04.927 [2024-11-20 11:40:10.580651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:26:04.927 [2024-11-20 11:40:10.580661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:04.927 [2024-11-20 11:40:10.580671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:04.927 [2024-11-20 11:40:10.580681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:26:04.927 [2024-11-20 11:40:10.580690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:04.927 [2024-11-20 11:40:10.580700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:04.927 [2024-11-20 11:40:10.580710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:26:04.927 [2024-11-20 11:40:10.580722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:04.927 [2024-11-20 11:40:10.580732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:04.927 [2024-11-20 11:40:10.580742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:26:04.927 [2024-11-20 11:40:10.580752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:04.927 [2024-11-20 11:40:10.580763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:04.927 [2024-11-20 11:40:10.580773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:26:04.927 [2024-11-20 11:40:10.580783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:04.927 [2024-11-20 11:40:10.580793] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:04.927 [2024-11-20 11:40:10.580804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:04.927 [2024-11-20 11:40:10.580815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:04.927 [2024-11-20 11:40:10.580829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:04.927 [2024-11-20 11:40:10.580841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:04.927 
[2024-11-20 11:40:10.580852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:04.927 [2024-11-20 11:40:10.580862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:04.927 [2024-11-20 11:40:10.580872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:04.927 [2024-11-20 11:40:10.580882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:04.927 [2024-11-20 11:40:10.580892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:04.927 [2024-11-20 11:40:10.580905] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:04.927 [2024-11-20 11:40:10.580919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:04.927 [2024-11-20 11:40:10.580931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:26:04.927 [2024-11-20 11:40:10.580943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:26:04.927 [2024-11-20 11:40:10.580954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:26:04.927 [2024-11-20 11:40:10.580965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:26:04.927 [2024-11-20 11:40:10.580976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:26:04.927 [2024-11-20 11:40:10.580987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:26:04.927 [2024-11-20 11:40:10.580999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:26:04.927 [2024-11-20 11:40:10.581010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:26:04.927 [2024-11-20 11:40:10.581022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:26:04.927 [2024-11-20 11:40:10.581033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:26:04.927 [2024-11-20 11:40:10.581044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:26:04.927 [2024-11-20 11:40:10.581055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:26:04.927 [2024-11-20 11:40:10.581066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:26:04.927 [2024-11-20 11:40:10.581078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:26:04.927 [2024-11-20 11:40:10.581090] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:04.927 [2024-11-20 11:40:10.581102] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:04.927 [2024-11-20 11:40:10.581114] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:04.927 [2024-11-20 11:40:10.581125] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:04.927 [2024-11-20 11:40:10.581150] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:04.927 [2024-11-20 11:40:10.581165] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:04.927 [2024-11-20 11:40:10.581181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.927 [2024-11-20 11:40:10.581196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:04.927 [2024-11-20 11:40:10.581214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.936 ms 00:26:04.927 [2024-11-20 11:40:10.581228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.927 [2024-11-20 11:40:10.620686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.927 [2024-11-20 11:40:10.620741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:04.927 [2024-11-20 11:40:10.620758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.386 ms 00:26:04.927 [2024-11-20 11:40:10.620770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.927 [2024-11-20 11:40:10.620940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.927 [2024-11-20 11:40:10.620963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:04.927 [2024-11-20 11:40:10.620976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:26:04.927 [2024-11-20 11:40:10.620985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.928 [2024-11-20 11:40:10.678905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.928 [2024-11-20 11:40:10.678953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:04.928 [2024-11-20 11:40:10.678969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.893 ms 00:26:04.928 [2024-11-20 11:40:10.678983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.928 [2024-11-20 11:40:10.679121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.928 [2024-11-20 11:40:10.679140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:04.928 [2024-11-20 11:40:10.679152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:04.928 [2024-11-20 11:40:10.679162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.928 [2024-11-20 11:40:10.679607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.928 [2024-11-20 11:40:10.679626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:04.928 [2024-11-20 11:40:10.679637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.422 ms 00:26:04.928 [2024-11-20 11:40:10.679653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:04.928 [2024-11-20 
11:40:10.679773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:04.928 [2024-11-20 11:40:10.679787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:04.928 [2024-11-20 11:40:10.679798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:26:04.928 [2024-11-20 11:40:10.679808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.187 [2024-11-20 11:40:10.699428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.188 [2024-11-20 11:40:10.699479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:05.188 [2024-11-20 11:40:10.699511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.597 ms 00:26:05.188 [2024-11-20 11:40:10.699522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.188 [2024-11-20 11:40:10.719516] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:26:05.188 [2024-11-20 11:40:10.719558] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:05.188 [2024-11-20 11:40:10.719575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.188 [2024-11-20 11:40:10.719587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:05.188 [2024-11-20 11:40:10.719598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.909 ms 00:26:05.188 [2024-11-20 11:40:10.719608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.188 [2024-11-20 11:40:10.751315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.188 [2024-11-20 11:40:10.751375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:05.188 [2024-11-20 11:40:10.751391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.576 ms 00:26:05.188 [2024-11-20 11:40:10.751402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.188 [2024-11-20 11:40:10.771096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.188 [2024-11-20 11:40:10.771158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:05.188 [2024-11-20 11:40:10.771173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.574 ms 00:26:05.188 [2024-11-20 11:40:10.771184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.188 [2024-11-20 11:40:10.790839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.188 [2024-11-20 11:40:10.790901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:05.188 [2024-11-20 11:40:10.790917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.558 ms 00:26:05.188 [2024-11-20 11:40:10.790929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.188 [2024-11-20 11:40:10.791814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.188 [2024-11-20 11:40:10.791844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:05.188 [2024-11-20 11:40:10.791858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.742 ms 00:26:05.188 [2024-11-20 11:40:10.791870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.188 [2024-11-20 11:40:10.881440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:26:05.188 [2024-11-20 11:40:10.881531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:05.188 [2024-11-20 11:40:10.881549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.538 ms 00:26:05.188 [2024-11-20 11:40:10.881560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.188 [2024-11-20 11:40:10.893306] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:26:05.188 [2024-11-20 11:40:10.909886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.188 [2024-11-20 11:40:10.909947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:05.188 [2024-11-20 11:40:10.909965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.195 ms 00:26:05.188 [2024-11-20 11:40:10.909977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.188 [2024-11-20 11:40:10.910136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.188 [2024-11-20 11:40:10.910151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:05.188 [2024-11-20 11:40:10.910163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:05.188 [2024-11-20 11:40:10.910173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.188 [2024-11-20 11:40:10.910228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.188 [2024-11-20 11:40:10.910240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:05.188 [2024-11-20 11:40:10.910250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:26:05.188 [2024-11-20 11:40:10.910260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.188 [2024-11-20 11:40:10.910287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.188 [2024-11-20 11:40:10.910302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:05.188 [2024-11-20 11:40:10.910312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:05.188 [2024-11-20 11:40:10.910322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.188 [2024-11-20 11:40:10.910359] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:05.188 [2024-11-20 11:40:10.910378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.188 [2024-11-20 11:40:10.910389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:05.188 [2024-11-20 11:40:10.910399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:26:05.188 [2024-11-20 11:40:10.910409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.448 [2024-11-20 11:40:10.948101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.448 [2024-11-20 11:40:10.948151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:05.448 [2024-11-20 11:40:10.948167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.669 ms 00:26:05.448 [2024-11-20 11:40:10.948195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.448 [2024-11-20 11:40:10.948350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.448 [2024-11-20 11:40:10.948369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:26:05.448 [2024-11-20 11:40:10.948382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:26:05.448 [2024-11-20 11:40:10.948393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.448 [2024-11-20 11:40:10.949598] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:05.448 [2024-11-20 11:40:10.954750] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 408.506 ms, result 0 00:26:05.448 [2024-11-20 11:40:10.955468] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:05.448 [2024-11-20 11:40:10.975401] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:06.385  [2024-11-20T11:40:13.081Z] Copying: 33/256 [MB] (33 MBps) [2024-11-20T11:40:14.019Z] Copying: 63/256 [MB] (30 MBps) [2024-11-20T11:40:15.396Z] Copying: 93/256 [MB] (29 MBps) [2024-11-20T11:40:16.333Z] Copying: 122/256 [MB] (28 MBps) [2024-11-20T11:40:17.269Z] Copying: 150/256 [MB] (28 MBps) [2024-11-20T11:40:18.205Z] Copying: 179/256 [MB] (28 MBps) [2024-11-20T11:40:19.141Z] Copying: 208/256 [MB] (29 MBps) [2024-11-20T11:40:19.714Z] Copying: 237/256 [MB] (28 MBps) [2024-11-20T11:40:19.714Z] Copying: 256/256 [MB] (average 29 MBps)[2024-11-20 11:40:19.626099] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:13.952 [2024-11-20 11:40:19.641697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.952 [2024-11-20 11:40:19.641744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:13.952 [2024-11-20 11:40:19.641760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:13.952 [2024-11-20 11:40:19.641783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.952 [2024-11-20 11:40:19.641809] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:26:13.952 [2024-11-20 11:40:19.646090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.952 [2024-11-20 11:40:19.646121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:13.952 [2024-11-20 11:40:19.646133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.265 ms 00:26:13.952 [2024-11-20 11:40:19.646143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.952 [2024-11-20 11:40:19.646377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.952 [2024-11-20 11:40:19.646390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:13.952 [2024-11-20 11:40:19.646401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:26:13.952 [2024-11-20 11:40:19.646411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.952 [2024-11-20 11:40:19.649356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.952 [2024-11-20 11:40:19.649388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:13.952 [2024-11-20 11:40:19.649400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.929 ms 00:26:13.952 [2024-11-20 11:40:19.649428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.952 [2024-11-20 11:40:19.655255] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.952 [2024-11-20 11:40:19.655285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:13.952 [2024-11-20 11:40:19.655298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.806 ms 00:26:13.952 [2024-11-20 11:40:19.655308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.952 [2024-11-20 11:40:19.692181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.952 [2024-11-20 11:40:19.692222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:13.952 [2024-11-20 11:40:19.692252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.807 ms 00:26:13.952 [2024-11-20 11:40:19.692262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.213 [2024-11-20 11:40:19.713557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.213 [2024-11-20 11:40:19.713609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:14.213 [2024-11-20 11:40:19.713623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.236 ms 00:26:14.213 [2024-11-20 11:40:19.713641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.213 [2024-11-20 11:40:19.713808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.213 [2024-11-20 11:40:19.713822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:14.213 [2024-11-20 11:40:19.713834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:26:14.213 [2024-11-20 11:40:19.713845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.213 [2024-11-20 11:40:19.750974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.213 [2024-11-20 11:40:19.751011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:14.213 [2024-11-20 11:40:19.751041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.095 ms 00:26:14.213 [2024-11-20 11:40:19.751051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.213 [2024-11-20 11:40:19.788391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.213 [2024-11-20 11:40:19.788432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:14.213 [2024-11-20 11:40:19.788447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.284 ms 00:26:14.213 [2024-11-20 11:40:19.788458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.213 [2024-11-20 11:40:19.824950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.213 [2024-11-20 11:40:19.825005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:14.213 [2024-11-20 11:40:19.825035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.426 ms 00:26:14.213 [2024-11-20 11:40:19.825045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.213 [2024-11-20 11:40:19.861798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.213 [2024-11-20 11:40:19.861839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:14.213 [2024-11-20 11:40:19.861870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.619 ms 00:26:14.213 [2024-11-20 11:40:19.861880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:26:14.213 [2024-11-20 11:40:19.861936] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:14.213 [2024-11-20 11:40:19.861954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:14.213 [2024-11-20 11:40:19.861967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:14.213 [2024-11-20 11:40:19.861980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:14.213 [2024-11-20 11:40:19.861991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:14.213 [2024-11-20 11:40:19.862003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:14.213 [2024-11-20 11:40:19.862014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:14.213 [2024-11-20 11:40:19.862025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:14.213 [2024-11-20 11:40:19.862036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:14.213 [2024-11-20 11:40:19.862046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:14.213 [2024-11-20 11:40:19.862057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:14.213 [2024-11-20 11:40:19.862068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:14.213 [2024-11-20 11:40:19.862079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 
state: free 00:26:14.214 [2024-11-20 11:40:19.862214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 
0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:14.214 [2024-11-20 11:40:19.862952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:14.215 [2024-11-20 11:40:19.862963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:14.215 [2024-11-20 11:40:19.862991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:14.215 [2024-11-20 11:40:19.863001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:14.215 [2024-11-20 11:40:19.863012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:14.215 [2024-11-20 11:40:19.863023] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:14.215 [2024-11-20 11:40:19.863033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:14.215 [2024-11-20 11:40:19.863051] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:14.215 [2024-11-20 11:40:19.863061] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 963441e8-3606-45d4-aa8c-a0a9c7a666bb 00:26:14.215 [2024-11-20 11:40:19.863073] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:14.215 [2024-11-20 11:40:19.863083] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:14.215 [2024-11-20 11:40:19.863093] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:14.215 [2024-11-20 11:40:19.863103] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:14.215 [2024-11-20 11:40:19.863113] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:14.215 [2024-11-20 11:40:19.863124] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:14.215 [2024-11-20 11:40:19.863134] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:14.215 [2024-11-20 11:40:19.863143] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:14.215 [2024-11-20 11:40:19.863152] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:14.215 [2024-11-20 11:40:19.863161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.215 [2024-11-20 11:40:19.863180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:14.215 [2024-11-20 11:40:19.863191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.227 ms 00:26:14.215 [2024-11-20 11:40:19.863201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.215 [2024-11-20 11:40:19.883455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.215 [2024-11-20 11:40:19.883516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:14.215 [2024-11-20 11:40:19.883529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.232 ms 00:26:14.215 [2024-11-20 11:40:19.883540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.215 [2024-11-20 11:40:19.884154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.215 [2024-11-20 11:40:19.884172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:14.215 [2024-11-20 11:40:19.884184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.575 ms 00:26:14.215 [2024-11-20 11:40:19.884194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.215 [2024-11-20 11:40:19.940754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:14.215 [2024-11-20 11:40:19.940812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:14.215 [2024-11-20 11:40:19.940827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:14.215 [2024-11-20 11:40:19.940838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.215 [2024-11-20 11:40:19.940962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:14.215 [2024-11-20 11:40:19.940974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:14.215 
[2024-11-20 11:40:19.940985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:14.215 [2024-11-20 11:40:19.941004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.215 [2024-11-20 11:40:19.941063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:14.215 [2024-11-20 11:40:19.941077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:14.215 [2024-11-20 11:40:19.941087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:14.215 [2024-11-20 11:40:19.941097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.215 [2024-11-20 11:40:19.941117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:14.215 [2024-11-20 11:40:19.941143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:14.215 [2024-11-20 11:40:19.941153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:14.215 [2024-11-20 11:40:19.941164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.474 [2024-11-20 11:40:20.072968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:14.474 [2024-11-20 11:40:20.073052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:14.474 [2024-11-20 11:40:20.073069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:14.474 [2024-11-20 11:40:20.073081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.474 [2024-11-20 11:40:20.187148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:14.474 [2024-11-20 11:40:20.187232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:14.474 [2024-11-20 11:40:20.187249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:14.474 [2024-11-20 11:40:20.187261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.474 [2024-11-20 11:40:20.187363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:14.474 [2024-11-20 11:40:20.187377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:14.474 [2024-11-20 11:40:20.187388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:14.474 [2024-11-20 11:40:20.187399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.474 [2024-11-20 11:40:20.187431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:14.474 [2024-11-20 11:40:20.187444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:14.474 [2024-11-20 11:40:20.187460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:14.474 [2024-11-20 11:40:20.187490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.474 [2024-11-20 11:40:20.187630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:14.474 [2024-11-20 11:40:20.187645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:14.474 [2024-11-20 11:40:20.187657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:14.474 [2024-11-20 11:40:20.187668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.474 [2024-11-20 11:40:20.187709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:14.474 [2024-11-20 11:40:20.187722] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:14.474 [2024-11-20 11:40:20.187733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:14.474 [2024-11-20 11:40:20.187749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.474 [2024-11-20 11:40:20.187791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:14.474 [2024-11-20 11:40:20.187803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:14.474 [2024-11-20 11:40:20.187814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:14.474 [2024-11-20 11:40:20.187825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.474 [2024-11-20 11:40:20.187872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:14.474 [2024-11-20 11:40:20.187885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:14.474 [2024-11-20 11:40:20.187923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:14.474 [2024-11-20 11:40:20.187935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.474 [2024-11-20 11:40:20.188099] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 546.389 ms, result 0 00:26:15.867 00:26:15.867 00:26:15.867 11:40:21 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:26:15.867 11:40:21 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:26:16.435 11:40:21 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:16.435 [2024-11-20 11:40:22.068793] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:26:16.435 [2024-11-20 11:40:22.068975] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79473 ] 00:26:16.694 [2024-11-20 11:40:22.261815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.694 [2024-11-20 11:40:22.377758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.263 [2024-11-20 11:40:22.746696] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:17.263 [2024-11-20 11:40:22.746774] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:17.263 [2024-11-20 11:40:22.910346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.263 [2024-11-20 11:40:22.910406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:17.263 [2024-11-20 11:40:22.910423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:17.263 [2024-11-20 11:40:22.910433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.263 [2024-11-20 11:40:22.913850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.263 [2024-11-20 11:40:22.913891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:17.263 [2024-11-20 11:40:22.913905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.395 ms 00:26:17.263 [2024-11-20 11:40:22.913916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.263 [2024-11-20 11:40:22.914029] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:17.263 [2024-11-20 11:40:22.915050] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:17.263 [2024-11-20 11:40:22.915081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.263 [2024-11-20 11:40:22.915093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:17.263 [2024-11-20 11:40:22.915105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.059 ms 00:26:17.263 [2024-11-20 11:40:22.915116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.263 [2024-11-20 11:40:22.916655] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:17.263 [2024-11-20 11:40:22.938729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.263 [2024-11-20 11:40:22.938795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:17.263 [2024-11-20 11:40:22.938812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.073 ms 00:26:17.263 [2024-11-20 11:40:22.938825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.263 [2024-11-20 11:40:22.938943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.263 [2024-11-20 11:40:22.938959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:17.263 [2024-11-20 11:40:22.938972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:26:17.263 [2024-11-20 11:40:22.938983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.263 [2024-11-20 11:40:22.946012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:17.263 [2024-11-20 11:40:22.946042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:17.263 [2024-11-20 11:40:22.946055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.981 ms 00:26:17.263 [2024-11-20 11:40:22.946065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.263 [2024-11-20 11:40:22.946175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.263 [2024-11-20 11:40:22.946192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:17.263 [2024-11-20 11:40:22.946204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:26:17.263 [2024-11-20 11:40:22.946215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.263 [2024-11-20 11:40:22.946248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.263 [2024-11-20 11:40:22.946263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:17.263 [2024-11-20 11:40:22.946275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:17.263 [2024-11-20 11:40:22.946286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.263 [2024-11-20 11:40:22.946314] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:26:17.263 [2024-11-20 11:40:22.951552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.263 [2024-11-20 11:40:22.951597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:17.263 [2024-11-20 11:40:22.951611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.246 ms 00:26:17.263 [2024-11-20 11:40:22.951622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.263 [2024-11-20 11:40:22.951701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.263 [2024-11-20 11:40:22.951714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:17.263 [2024-11-20 11:40:22.951726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:17.263 [2024-11-20 11:40:22.951737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.263 [2024-11-20 11:40:22.951762] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:17.263 [2024-11-20 11:40:22.951790] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:17.263 [2024-11-20 11:40:22.951829] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:17.263 [2024-11-20 11:40:22.951849] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:17.263 [2024-11-20 11:40:22.951969] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:17.263 [2024-11-20 11:40:22.951984] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:17.263 [2024-11-20 11:40:22.951999] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:17.263 [2024-11-20 11:40:22.952014] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:17.263 [2024-11-20 11:40:22.952032] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:17.263 [2024-11-20 11:40:22.952044] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:26:17.263 [2024-11-20 11:40:22.952056] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:17.263 [2024-11-20 11:40:22.952067] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:17.263 [2024-11-20 11:40:22.952079] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:17.263 [2024-11-20 11:40:22.952090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.264 [2024-11-20 11:40:22.952102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:17.264 [2024-11-20 11:40:22.952114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.331 ms 00:26:17.264 [2024-11-20 11:40:22.952125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.264 [2024-11-20 11:40:22.952215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.264 [2024-11-20 11:40:22.952228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:17.264 [2024-11-20 11:40:22.952243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:26:17.264 [2024-11-20 11:40:22.952254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.264 [2024-11-20 11:40:22.952362] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:17.264 [2024-11-20 11:40:22.952382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:17.264 [2024-11-20 11:40:22.952394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:17.264 [2024-11-20 11:40:22.952407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:17.264 [2024-11-20 11:40:22.952419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:17.264 [2024-11-20 11:40:22.952430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:17.264 [2024-11-20 11:40:22.952442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:26:17.264 [2024-11-20 11:40:22.952453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:17.264 [2024-11-20 11:40:22.952464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:26:17.264 [2024-11-20 11:40:22.952486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:17.264 [2024-11-20 11:40:22.952498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:17.264 [2024-11-20 11:40:22.952508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:26:17.264 [2024-11-20 11:40:22.952519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:17.264 [2024-11-20 11:40:22.952541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:17.264 [2024-11-20 11:40:22.952553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:26:17.264 [2024-11-20 11:40:22.952564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:17.264 [2024-11-20 11:40:22.952575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:17.264 [2024-11-20 11:40:22.952586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:26:17.264 [2024-11-20 11:40:22.952597] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:17.264 [2024-11-20 11:40:22.952608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:17.264 [2024-11-20 11:40:22.952619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:26:17.264 [2024-11-20 11:40:22.952629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:17.264 [2024-11-20 11:40:22.952640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:17.264 [2024-11-20 11:40:22.952651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:26:17.264 [2024-11-20 11:40:22.952661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:17.264 [2024-11-20 11:40:22.952672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:17.264 [2024-11-20 11:40:22.952683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:26:17.264 [2024-11-20 11:40:22.952693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:17.264 [2024-11-20 11:40:22.952704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:17.264 [2024-11-20 11:40:22.952714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:26:17.264 [2024-11-20 11:40:22.952725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:17.264 [2024-11-20 11:40:22.952735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:17.264 [2024-11-20 11:40:22.952746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:26:17.264 [2024-11-20 11:40:22.952756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:17.264 [2024-11-20 11:40:22.952767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:17.264 [2024-11-20 11:40:22.952778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:26:17.264 [2024-11-20 11:40:22.952788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:17.264 [2024-11-20 11:40:22.952799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:17.264 [2024-11-20 11:40:22.952811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:26:17.264 [2024-11-20 11:40:22.952821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:17.264 [2024-11-20 11:40:22.952832] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:17.264 [2024-11-20 11:40:22.952842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:26:17.264 [2024-11-20 11:40:22.952853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:17.264 [2024-11-20 11:40:22.952863] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:17.264 [2024-11-20 11:40:22.952875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:17.264 [2024-11-20 11:40:22.952886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:17.264 [2024-11-20 11:40:22.952901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:17.264 [2024-11-20 11:40:22.952913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:17.264 [2024-11-20 11:40:22.952924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:17.264 [2024-11-20 11:40:22.952934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:17.264 
[2024-11-20 11:40:22.952945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:17.264 [2024-11-20 11:40:22.952955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:17.264 [2024-11-20 11:40:22.952967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:17.264 [2024-11-20 11:40:22.952979] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:17.264 [2024-11-20 11:40:22.952993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:17.264 [2024-11-20 11:40:22.953006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:26:17.264 [2024-11-20 11:40:22.953018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:26:17.264 [2024-11-20 11:40:22.953031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:26:17.264 [2024-11-20 11:40:22.953043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:26:17.264 [2024-11-20 11:40:22.953055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:26:17.264 [2024-11-20 11:40:22.953067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:26:17.264 [2024-11-20 11:40:22.953079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:26:17.264 [2024-11-20 11:40:22.953092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:26:17.264 [2024-11-20 11:40:22.953103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:26:17.264 [2024-11-20 11:40:22.953116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:26:17.264 [2024-11-20 11:40:22.953137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:26:17.264 [2024-11-20 11:40:22.953149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:26:17.264 [2024-11-20 11:40:22.953160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:26:17.264 [2024-11-20 11:40:22.953173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:26:17.264 [2024-11-20 11:40:22.953185] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:17.264 [2024-11-20 11:40:22.953198] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:17.264 [2024-11-20 11:40:22.953211] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:26:17.264 [2024-11-20 11:40:22.953223] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:17.264 [2024-11-20 11:40:22.953235] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:17.264 [2024-11-20 11:40:22.953248] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:17.264 [2024-11-20 11:40:22.953261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.264 [2024-11-20 11:40:22.953273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:17.264 [2024-11-20 11:40:22.953290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.964 ms 00:26:17.264 [2024-11-20 11:40:22.953302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.264 [2024-11-20 11:40:22.996633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.264 [2024-11-20 11:40:22.996685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:17.264 [2024-11-20 11:40:22.996703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.265 ms 00:26:17.264 [2024-11-20 11:40:22.996715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.264 [2024-11-20 11:40:22.996891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.264 [2024-11-20 11:40:22.996909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:17.264 [2024-11-20 11:40:22.996921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:26:17.264 [2024-11-20 11:40:22.996931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.523 [2024-11-20 11:40:23.065928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.523 [2024-11-20 11:40:23.065986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:17.523 [2024-11-20 11:40:23.066004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.970 ms 00:26:17.523 [2024-11-20 11:40:23.066021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.523 [2024-11-20 11:40:23.066172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.523 [2024-11-20 11:40:23.066187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:17.523 [2024-11-20 11:40:23.066201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:17.523 [2024-11-20 11:40:23.066213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.523 [2024-11-20 11:40:23.066681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.523 [2024-11-20 11:40:23.066706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:17.523 [2024-11-20 11:40:23.066719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.442 ms 00:26:17.523 [2024-11-20 11:40:23.066738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.523 [2024-11-20 11:40:23.066872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.523 [2024-11-20 11:40:23.066895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:17.523 [2024-11-20 11:40:23.066908] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:26:17.523 [2024-11-20 11:40:23.066920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.523 [2024-11-20 11:40:23.089250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.523 [2024-11-20 11:40:23.089303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:17.523 [2024-11-20 11:40:23.089320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.302 ms 00:26:17.523 [2024-11-20 11:40:23.089333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.523 [2024-11-20 11:40:23.111631] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:26:17.523 [2024-11-20 11:40:23.111684] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:17.523 [2024-11-20 11:40:23.111703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.523 [2024-11-20 11:40:23.111733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:17.523 [2024-11-20 11:40:23.111747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.195 ms 00:26:17.523 [2024-11-20 11:40:23.111760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.523 [2024-11-20 11:40:23.146444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.523 [2024-11-20 11:40:23.146535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:17.523 [2024-11-20 11:40:23.146552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.568 ms 00:26:17.523 [2024-11-20 11:40:23.146597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.523 [2024-11-20 11:40:23.167684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.523 [2024-11-20 11:40:23.167728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:17.523 [2024-11-20 11:40:23.167743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.978 ms 00:26:17.523 [2024-11-20 11:40:23.167753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.523 [2024-11-20 11:40:23.188245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.523 [2024-11-20 11:40:23.188290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:17.523 [2024-11-20 11:40:23.188305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.367 ms 00:26:17.523 [2024-11-20 11:40:23.188317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.523 [2024-11-20 11:40:23.189180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.523 [2024-11-20 11:40:23.189210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:17.523 [2024-11-20 11:40:23.189224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.742 ms 00:26:17.523 [2024-11-20 11:40:23.189237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.783 [2024-11-20 11:40:23.284861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.783 [2024-11-20 11:40:23.284930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:17.783 [2024-11-20 11:40:23.284948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 95.589 ms 00:26:17.783 [2024-11-20 11:40:23.284961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.783 [2024-11-20 11:40:23.297701] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:26:17.783 [2024-11-20 11:40:23.315489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.783 [2024-11-20 11:40:23.315552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:17.783 [2024-11-20 11:40:23.315568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.364 ms 00:26:17.783 [2024-11-20 11:40:23.315596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.783 [2024-11-20 11:40:23.315731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.783 [2024-11-20 11:40:23.315746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:17.783 [2024-11-20 11:40:23.315760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:17.783 [2024-11-20 11:40:23.315771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.783 [2024-11-20 11:40:23.315828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.783 [2024-11-20 11:40:23.315840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:17.783 [2024-11-20 11:40:23.315852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:26:17.783 [2024-11-20 11:40:23.315863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.783 [2024-11-20 11:40:23.315892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.783 [2024-11-20 11:40:23.315908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:17.783 [2024-11-20 11:40:23.315919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:17.783 [2024-11-20 11:40:23.315930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.783 [2024-11-20 11:40:23.315971] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:17.783 [2024-11-20 11:40:23.315984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.783 [2024-11-20 11:40:23.315996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:17.783 [2024-11-20 11:40:23.316007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:26:17.783 [2024-11-20 11:40:23.316017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.783 [2024-11-20 11:40:23.356529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.783 [2024-11-20 11:40:23.356592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:17.783 [2024-11-20 11:40:23.356609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.487 ms 00:26:17.783 [2024-11-20 11:40:23.356621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.783 [2024-11-20 11:40:23.356750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.783 [2024-11-20 11:40:23.356766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:17.783 [2024-11-20 11:40:23.356778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:26:17.783 [2024-11-20 11:40:23.356789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
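Each step in the startup sequence above is bracketed by the same four mngt/ftl_mngt.c notices (427: Action/Rollback, 428: name, 430: duration, 431: status), so per-step timings can be mined straight out of the console when a run looks slow; in this startup the heavy steps were 'Restore P2L checkpoints' (95.589 ms), 'Initialize NV cache' (68.970 ms) and 'Initialize metadata' (43.265 ms). A minimal sketch, assuming the console has been saved one notice per line as Jenkins emits them (console.log is a placeholder name):

awk '/428:trace_step/ { sub(/.*name: /, "");     step = $0 }
     /430:trace_step/ { sub(/.*duration: /, ""); printf "%10s ms  %s\n", $1, step }' console.log |
    sort -rn | head

The pairing works because a 430 duration notice always follows its 428 name notice, as in every step logged above.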
00:26:17.783 [2024-11-20 11:40:23.357829] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:17.783 [2024-11-20 11:40:23.362617] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 447.135 ms, result 0 00:26:17.783 [2024-11-20 11:40:23.363447] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:17.784 [2024-11-20 11:40:23.383185] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:17.784  [2024-11-20T11:40:23.546Z] Copying: 4096/4096 [kB] (average 30 MBps) [2024-11-20 11:40:23.518829] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:17.784 [2024-11-20 11:40:23.534815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.784 [2024-11-20 11:40:23.534902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:17.784 [2024-11-20 11:40:23.534929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:17.784 [2024-11-20 11:40:23.534956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.784 [2024-11-20 11:40:23.534998] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:26:17.784 [2024-11-20 11:40:23.539279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.784 [2024-11-20 11:40:23.539326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:17.784 [2024-11-20 11:40:23.539343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.252 ms 00:26:17.784 [2024-11-20 11:40:23.539359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:17.784 [2024-11-20 11:40:23.541409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:17.784 [2024-11-20 11:40:23.541483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:17.784 [2024-11-20 11:40:23.541510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.010 ms 00:26:17.784 [2024-11-20 11:40:23.541531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.044 [2024-11-20 11:40:23.545767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.044 [2024-11-20 11:40:23.545821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:18.044 [2024-11-20 11:40:23.545841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.199 ms 00:26:18.044 [2024-11-20 11:40:23.545857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.044 [2024-11-20 11:40:23.553545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.044 [2024-11-20 11:40:23.553596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:18.044 [2024-11-20 11:40:23.553612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.639 ms 00:26:18.044 [2024-11-20 11:40:23.553624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.044 [2024-11-20 11:40:23.594236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.044 [2024-11-20 11:40:23.594313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:18.044 [2024-11-20 11:40:23.594336] mngt/ftl_mngt.c: 430:trace_step:
*NOTICE*: [FTL][ftl0] duration: 40.546 ms 00:26:18.044 [2024-11-20 11:40:23.594353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.044 [2024-11-20 11:40:23.617302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.044 [2024-11-20 11:40:23.617368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:18.044 [2024-11-20 11:40:23.617392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.838 ms 00:26:18.044 [2024-11-20 11:40:23.617410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.044 [2024-11-20 11:40:23.617593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.044 [2024-11-20 11:40:23.617611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:18.044 [2024-11-20 11:40:23.617625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:26:18.044 [2024-11-20 11:40:23.617637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.044 [2024-11-20 11:40:23.659161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.044 [2024-11-20 11:40:23.659208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:18.044 [2024-11-20 11:40:23.659223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.485 ms 00:26:18.044 [2024-11-20 11:40:23.659233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.044 [2024-11-20 11:40:23.697780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.044 [2024-11-20 11:40:23.697826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:18.044 [2024-11-20 11:40:23.697842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.484 ms 00:26:18.044 [2024-11-20 11:40:23.697853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.044 [2024-11-20 11:40:23.735787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.044 [2024-11-20 11:40:23.735829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:18.044 [2024-11-20 11:40:23.735842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.844 ms 00:26:18.044 [2024-11-20 11:40:23.735853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.044 [2024-11-20 11:40:23.772314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.044 [2024-11-20 11:40:23.772368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:18.044 [2024-11-20 11:40:23.772382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.376 ms 00:26:18.044 [2024-11-20 11:40:23.772392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.044 [2024-11-20 11:40:23.772447] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:18.044 [2024-11-20 11:40:23.772464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:18.044 [2024-11-20 11:40:23.772488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:18.044 [2024-11-20 11:40:23.772500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:18.044 [2024-11-20 11:40:23.772511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:26:18.044 [2024-11-20 11:40:23.772522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:18.044 [2024-11-20 11:40:23.772533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:18.044 [2024-11-20 11:40:23.772544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:18.044 [2024-11-20 11:40:23.772555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:18.044 [2024-11-20 11:40:23.772565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:18.044 [2024-11-20 11:40:23.772577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:18.044 [2024-11-20 11:40:23.772588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:18.044 [2024-11-20 11:40:23.772599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:18.044 [2024-11-20 11:40:23.772609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:18.044 [2024-11-20 11:40:23.772620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:18.044 [2024-11-20 11:40:23.772631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:18.044 [2024-11-20 11:40:23.772641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:18.044 [2024-11-20 11:40:23.772652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:18.044 [2024-11-20 11:40:23.772663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:18.044 [2024-11-20 11:40:23.772674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.772685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.772695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.772706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.772716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.772727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.772738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.772748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.772759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.772770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.772780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.772790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.772801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.772812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.772822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.772832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.772843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.772854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.772864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.772875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.772885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.772896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.772906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.772917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.772927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.772937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.772947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.772957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.772968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.772978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.772988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.772999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.773009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.773020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.773030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.773040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.773051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.773061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.773072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.773082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.773092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.773102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.773113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.773131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.773142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.773153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.773163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.773173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.773184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.773194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.773205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.773215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.773226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.773236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.773247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.773257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.773267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.773277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.773287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.773298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.773308] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.773318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.773328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.773339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.773349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.773360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.773370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.773381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.773391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.773402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.773412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.773423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.773433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.773443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.773453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.773464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.773482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.773505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.773516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.773526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.773536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.773547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:18.045 [2024-11-20 11:40:23.773565] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:18.045 [2024-11-20 11:40:23.773575] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 963441e8-3606-45d4-aa8c-a0a9c7a666bb 00:26:18.045 [2024-11-20 11:40:23.773586] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:18.045 [2024-11-20 11:40:23.773596] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:26:18.045 [2024-11-20 11:40:23.773606] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:18.045 [2024-11-20 11:40:23.773617] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:18.045 [2024-11-20 11:40:23.773626] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:18.045 [2024-11-20 11:40:23.773638] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:18.045 [2024-11-20 11:40:23.773647] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:18.045 [2024-11-20 11:40:23.773656] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:18.046 [2024-11-20 11:40:23.773665] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:18.046 [2024-11-20 11:40:23.773675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.046 [2024-11-20 11:40:23.773689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:18.046 [2024-11-20 11:40:23.773701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.229 ms 00:26:18.046 [2024-11-20 11:40:23.773711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.046 [2024-11-20 11:40:23.794536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.046 [2024-11-20 11:40:23.794572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:18.046 [2024-11-20 11:40:23.794585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.803 ms 00:26:18.046 [2024-11-20 11:40:23.794595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.046 [2024-11-20 11:40:23.795219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.046 [2024-11-20 11:40:23.795239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:18.046 [2024-11-20 11:40:23.795252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.565 ms 00:26:18.046 [2024-11-20 11:40:23.795263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.305 [2024-11-20 11:40:23.852839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:18.305 [2024-11-20 11:40:23.852885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:18.305 [2024-11-20 11:40:23.852899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:18.305 [2024-11-20 11:40:23.852909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.305 [2024-11-20 11:40:23.853022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:18.305 [2024-11-20 11:40:23.853035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:18.305 [2024-11-20 11:40:23.853046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:18.305 [2024-11-20 11:40:23.853056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.305 [2024-11-20 11:40:23.853111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:18.305 [2024-11-20 11:40:23.853133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:18.305 [2024-11-20 11:40:23.853151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:18.305 [2024-11-20 11:40:23.853163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.305 [2024-11-20 11:40:23.853182] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:18.305 [2024-11-20 11:40:23.853197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:18.305 [2024-11-20 11:40:23.853207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:18.305 [2024-11-20 11:40:23.853217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.305 [2024-11-20 11:40:23.983876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:18.305 [2024-11-20 11:40:23.983957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:18.305 [2024-11-20 11:40:23.983974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:18.305 [2024-11-20 11:40:23.983984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.564 [2024-11-20 11:40:24.089913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:18.564 [2024-11-20 11:40:24.089976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:18.564 [2024-11-20 11:40:24.089992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:18.564 [2024-11-20 11:40:24.090002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.564 [2024-11-20 11:40:24.090132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:18.564 [2024-11-20 11:40:24.090144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:18.564 [2024-11-20 11:40:24.090155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:18.564 [2024-11-20 11:40:24.090165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.564 [2024-11-20 11:40:24.090196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:18.564 [2024-11-20 11:40:24.090223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:18.564 [2024-11-20 11:40:24.090242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:18.564 [2024-11-20 11:40:24.090253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.564 [2024-11-20 11:40:24.090379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:18.564 [2024-11-20 11:40:24.090392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:18.564 [2024-11-20 11:40:24.090420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:18.564 [2024-11-20 11:40:24.090431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.564 [2024-11-20 11:40:24.090470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:18.564 [2024-11-20 11:40:24.090483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:18.564 [2024-11-20 11:40:24.090495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:18.564 [2024-11-20 11:40:24.090510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.564 [2024-11-20 11:40:24.090570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:18.564 [2024-11-20 11:40:24.090584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:18.564 [2024-11-20 11:40:24.090595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:18.564 [2024-11-20 11:40:24.090606] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:26:18.564 [2024-11-20 11:40:24.090653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:18.564 [2024-11-20 11:40:24.090665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:18.564 [2024-11-20 11:40:24.090681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:18.564 [2024-11-20 11:40:24.090692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.564 [2024-11-20 11:40:24.090834] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 556.031 ms, result 0 00:26:19.500 00:26:19.500 00:26:19.500 11:40:25 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=79504 00:26:19.500 11:40:25 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:26:19.500 11:40:25 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 79504 00:26:19.500 11:40:25 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 79504 ']' 00:26:19.500 11:40:25 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:19.500 11:40:25 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:19.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:19.500 11:40:25 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:19.500 11:40:25 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:19.500 11:40:25 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:26:19.759 [2024-11-20 11:40:25.284899] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
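[Editor's sketch, not captured output] The xtrace records around this point drive a fixed sequence: start spdk_tgt with FTL init tracing, wait for its RPC socket, re-create the ftl0 bdev via load_config, trim 1024 blocks at each end of the device's LBA range, then tear the target down. A minimal Bash reconstruction follows, using only commands and helpers visible in this run; waitforlisten and killprocess are functions from test/common/autotest_common.sh and must be sourced, and the JSON config fed to load_config is not shown in the log, so the name ftl.json here is an assumption.

    #!/usr/bin/env bash
    SPDK=/home/vagrant/spdk_repo/spdk
    source "$SPDK/test/common/autotest_common.sh"   # provides waitforlisten / killprocess

    "$SPDK/build/bin/spdk_tgt" -L ftl_init &        # start target with FTL init tracing
    svcpid=$!
    waitforlisten "$svcpid"                         # block until /var/tmp/spdk.sock accepts RPCs

    "$SPDK/scripts/rpc.py" load_config < ftl.json   # re-create the ftl0 bdev (config name assumed)

    # Trim the first and last 1024 LBAs; 23591936 = 23592960 (the L2P entry
    # count reported later in this log) minus 1024.
    "$SPDK/scripts/rpc.py" bdev_ftl_unmap -b ftl0 --lba 0        --num_blocks 1024
    "$SPDK/scripts/rpc.py" bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024

    killprocess "$svcpid"                           # kill -0 liveness check, then kill + wait
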
00:26:19.759 [2024-11-20 11:40:25.285047] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79504 ] 00:26:19.759 [2024-11-20 11:40:25.456757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:20.019 [2024-11-20 11:40:25.581998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:20.957 11:40:26 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:20.957 11:40:26 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:26:20.957 11:40:26 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:26:20.957 [2024-11-20 11:40:26.663090] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:20.957 [2024-11-20 11:40:26.663176] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:21.218 [2024-11-20 11:40:26.827392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.218 [2024-11-20 11:40:26.827445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:21.218 [2024-11-20 11:40:26.827480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:21.218 [2024-11-20 11:40:26.827502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.218 [2024-11-20 11:40:26.831178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.218 [2024-11-20 11:40:26.831217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:21.218 [2024-11-20 11:40:26.831248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.653 ms 00:26:21.218 [2024-11-20 11:40:26.831258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.218 [2024-11-20 11:40:26.831366] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:21.218 [2024-11-20 11:40:26.832464] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:21.218 [2024-11-20 11:40:26.832508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.218 [2024-11-20 11:40:26.832519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:21.218 [2024-11-20 11:40:26.832532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.153 ms 00:26:21.218 [2024-11-20 11:40:26.832542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.218 [2024-11-20 11:40:26.833990] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:21.218 [2024-11-20 11:40:26.853558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.218 [2024-11-20 11:40:26.853601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:21.218 [2024-11-20 11:40:26.853616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.571 ms 00:26:21.218 [2024-11-20 11:40:26.853630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.218 [2024-11-20 11:40:26.853730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.218 [2024-11-20 11:40:26.853747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:21.218 [2024-11-20 11:40:26.853758] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:26:21.218 [2024-11-20 11:40:26.853771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.218 [2024-11-20 11:40:26.860435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.218 [2024-11-20 11:40:26.860487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:21.218 [2024-11-20 11:40:26.860500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.603 ms 00:26:21.218 [2024-11-20 11:40:26.860514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.218 [2024-11-20 11:40:26.860657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.218 [2024-11-20 11:40:26.860677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:21.218 [2024-11-20 11:40:26.860689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:26:21.218 [2024-11-20 11:40:26.860703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.218 [2024-11-20 11:40:26.860749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.218 [2024-11-20 11:40:26.860765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:21.218 [2024-11-20 11:40:26.860775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:26:21.218 [2024-11-20 11:40:26.860790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.218 [2024-11-20 11:40:26.860818] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:26:21.218 [2024-11-20 11:40:26.865676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.218 [2024-11-20 11:40:26.865708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:21.218 [2024-11-20 11:40:26.865725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.859 ms 00:26:21.218 [2024-11-20 11:40:26.865736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.218 [2024-11-20 11:40:26.865815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.218 [2024-11-20 11:40:26.865828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:21.218 [2024-11-20 11:40:26.865844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:21.218 [2024-11-20 11:40:26.865860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.218 [2024-11-20 11:40:26.865888] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:21.218 [2024-11-20 11:40:26.865912] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:21.218 [2024-11-20 11:40:26.865962] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:21.218 [2024-11-20 11:40:26.865982] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:21.218 [2024-11-20 11:40:26.866079] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:21.218 [2024-11-20 11:40:26.866092] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:21.218 [2024-11-20 11:40:26.866111] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:21.218 [2024-11-20 11:40:26.866130] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:21.218 [2024-11-20 11:40:26.866147] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:21.218 [2024-11-20 11:40:26.866159] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:26:21.218 [2024-11-20 11:40:26.866175] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:21.218 [2024-11-20 11:40:26.866185] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:21.218 [2024-11-20 11:40:26.866220] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:21.218 [2024-11-20 11:40:26.866231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.218 [2024-11-20 11:40:26.866246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:21.218 [2024-11-20 11:40:26.866257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.351 ms 00:26:21.218 [2024-11-20 11:40:26.866271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.218 [2024-11-20 11:40:26.866353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.218 [2024-11-20 11:40:26.866369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:21.218 [2024-11-20 11:40:26.866380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:26:21.218 [2024-11-20 11:40:26.866394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.218 [2024-11-20 11:40:26.866498] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:21.218 [2024-11-20 11:40:26.866516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:21.218 [2024-11-20 11:40:26.866527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:21.218 [2024-11-20 11:40:26.866542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:21.218 [2024-11-20 11:40:26.866553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:21.218 [2024-11-20 11:40:26.866567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:21.218 [2024-11-20 11:40:26.866577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:26:21.218 [2024-11-20 11:40:26.866598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:21.218 [2024-11-20 11:40:26.866608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:26:21.218 [2024-11-20 11:40:26.866623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:21.218 [2024-11-20 11:40:26.866633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:21.218 [2024-11-20 11:40:26.866646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:26:21.218 [2024-11-20 11:40:26.866656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:21.218 [2024-11-20 11:40:26.866670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:21.218 [2024-11-20 11:40:26.866680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:26:21.218 [2024-11-20 11:40:26.866694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:21.218 
[2024-11-20 11:40:26.866704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:21.218 [2024-11-20 11:40:26.866717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:26:21.219 [2024-11-20 11:40:26.866727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:21.219 [2024-11-20 11:40:26.866741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:21.219 [2024-11-20 11:40:26.866762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:26:21.219 [2024-11-20 11:40:26.866777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:21.219 [2024-11-20 11:40:26.866786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:21.219 [2024-11-20 11:40:26.866805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:26:21.219 [2024-11-20 11:40:26.866815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:21.219 [2024-11-20 11:40:26.866829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:21.219 [2024-11-20 11:40:26.866838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:26:21.219 [2024-11-20 11:40:26.866852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:21.219 [2024-11-20 11:40:26.866862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:21.219 [2024-11-20 11:40:26.866875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:26:21.219 [2024-11-20 11:40:26.866885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:21.219 [2024-11-20 11:40:26.866899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:21.219 [2024-11-20 11:40:26.866909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:26:21.219 [2024-11-20 11:40:26.866924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:21.219 [2024-11-20 11:40:26.866933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:21.219 [2024-11-20 11:40:26.866947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:26:21.219 [2024-11-20 11:40:26.866957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:21.219 [2024-11-20 11:40:26.866971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:21.219 [2024-11-20 11:40:26.866980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:26:21.219 [2024-11-20 11:40:26.866999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:21.219 [2024-11-20 11:40:26.867009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:21.219 [2024-11-20 11:40:26.867023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:26:21.219 [2024-11-20 11:40:26.867033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:21.219 [2024-11-20 11:40:26.867047] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:21.219 [2024-11-20 11:40:26.867057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:21.219 [2024-11-20 11:40:26.867077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:21.219 [2024-11-20 11:40:26.867087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:21.219 [2024-11-20 11:40:26.867102] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:26:21.219 [2024-11-20 11:40:26.867112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:21.219 [2024-11-20 11:40:26.867126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:21.219 [2024-11-20 11:40:26.867136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:21.219 [2024-11-20 11:40:26.867151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:21.219 [2024-11-20 11:40:26.867160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:21.219 [2024-11-20 11:40:26.867176] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:21.219 [2024-11-20 11:40:26.867189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:21.219 [2024-11-20 11:40:26.867209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:26:21.219 [2024-11-20 11:40:26.867220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:26:21.219 [2024-11-20 11:40:26.867236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:26:21.219 [2024-11-20 11:40:26.867247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:26:21.219 [2024-11-20 11:40:26.867262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:26:21.219 [2024-11-20 11:40:26.867272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:26:21.219 [2024-11-20 11:40:26.867287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:26:21.219 [2024-11-20 11:40:26.867298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:26:21.219 [2024-11-20 11:40:26.867312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:26:21.219 [2024-11-20 11:40:26.867323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:26:21.219 [2024-11-20 11:40:26.867338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:26:21.219 [2024-11-20 11:40:26.867349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:26:21.219 [2024-11-20 11:40:26.867363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:26:21.219 [2024-11-20 11:40:26.867374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:26:21.219 [2024-11-20 11:40:26.867389] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:21.219 [2024-11-20 
11:40:26.867401] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:21.219 [2024-11-20 11:40:26.867422] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:21.219 [2024-11-20 11:40:26.867433] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:21.219 [2024-11-20 11:40:26.867449] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:21.219 [2024-11-20 11:40:26.867459] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:21.219 [2024-11-20 11:40:26.867484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.219 [2024-11-20 11:40:26.867495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:21.219 [2024-11-20 11:40:26.867510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.049 ms 00:26:21.219 [2024-11-20 11:40:26.867521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.219 [2024-11-20 11:40:26.908837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.219 [2024-11-20 11:40:26.908879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:21.219 [2024-11-20 11:40:26.908915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.241 ms 00:26:21.219 [2024-11-20 11:40:26.908926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.219 [2024-11-20 11:40:26.909084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.219 [2024-11-20 11:40:26.909098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:21.219 [2024-11-20 11:40:26.909114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:26:21.219 [2024-11-20 11:40:26.909137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.219 [2024-11-20 11:40:26.957855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.219 [2024-11-20 11:40:26.957898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:21.219 [2024-11-20 11:40:26.957936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.670 ms 00:26:21.220 [2024-11-20 11:40:26.957948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.220 [2024-11-20 11:40:26.958060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.220 [2024-11-20 11:40:26.958073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:21.220 [2024-11-20 11:40:26.958086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:21.220 [2024-11-20 11:40:26.958097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.220 [2024-11-20 11:40:26.958544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.220 [2024-11-20 11:40:26.958566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:21.220 [2024-11-20 11:40:26.958588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.419 ms 00:26:21.220 [2024-11-20 11:40:26.958599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:26:21.220 [2024-11-20 11:40:26.958727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.220 [2024-11-20 11:40:26.958742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:21.220 [2024-11-20 11:40:26.958757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:26:21.220 [2024-11-20 11:40:26.958768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.479 [2024-11-20 11:40:26.980779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.479 [2024-11-20 11:40:26.980819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:21.479 [2024-11-20 11:40:26.980839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.978 ms 00:26:21.479 [2024-11-20 11:40:26.980866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.479 [2024-11-20 11:40:26.999784] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:21.479 [2024-11-20 11:40:26.999836] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:21.479 [2024-11-20 11:40:26.999872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.479 [2024-11-20 11:40:26.999883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:21.479 [2024-11-20 11:40:26.999897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.872 ms 00:26:21.479 [2024-11-20 11:40:26.999907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.479 [2024-11-20 11:40:27.029536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.479 [2024-11-20 11:40:27.029576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:21.479 [2024-11-20 11:40:27.029609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.544 ms 00:26:21.479 [2024-11-20 11:40:27.029620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.479 [2024-11-20 11:40:27.048051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.479 [2024-11-20 11:40:27.048095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:21.479 [2024-11-20 11:40:27.048113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.343 ms 00:26:21.479 [2024-11-20 11:40:27.048139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.479 [2024-11-20 11:40:27.066231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.479 [2024-11-20 11:40:27.066266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:21.479 [2024-11-20 11:40:27.066281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.011 ms 00:26:21.479 [2024-11-20 11:40:27.066291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.479 [2024-11-20 11:40:27.067144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.479 [2024-11-20 11:40:27.067170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:21.479 [2024-11-20 11:40:27.067187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.711 ms 00:26:21.479 [2024-11-20 11:40:27.067198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.479 [2024-11-20 
11:40:27.165856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.479 [2024-11-20 11:40:27.165928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:21.480 [2024-11-20 11:40:27.165955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.625 ms 00:26:21.480 [2024-11-20 11:40:27.165967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.480 [2024-11-20 11:40:27.177850] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:26:21.480 [2024-11-20 11:40:27.194677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.480 [2024-11-20 11:40:27.194760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:21.480 [2024-11-20 11:40:27.194782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.576 ms 00:26:21.480 [2024-11-20 11:40:27.194798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.480 [2024-11-20 11:40:27.194936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.480 [2024-11-20 11:40:27.194956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:21.480 [2024-11-20 11:40:27.194968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:21.480 [2024-11-20 11:40:27.194984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.480 [2024-11-20 11:40:27.195041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.480 [2024-11-20 11:40:27.195058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:21.480 [2024-11-20 11:40:27.195069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:26:21.480 [2024-11-20 11:40:27.195086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.480 [2024-11-20 11:40:27.195118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.480 [2024-11-20 11:40:27.195134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:21.480 [2024-11-20 11:40:27.195144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:21.480 [2024-11-20 11:40:27.195162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.480 [2024-11-20 11:40:27.195203] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:21.480 [2024-11-20 11:40:27.195225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.480 [2024-11-20 11:40:27.195236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:21.480 [2024-11-20 11:40:27.195259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:21.480 [2024-11-20 11:40:27.195269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.480 [2024-11-20 11:40:27.233328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.480 [2024-11-20 11:40:27.233372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:21.480 [2024-11-20 11:40:27.233389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.018 ms 00:26:21.480 [2024-11-20 11:40:27.233400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.480 [2024-11-20 11:40:27.233526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.480 [2024-11-20 11:40:27.233541] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:21.480 [2024-11-20 11:40:27.233555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:26:21.480 [2024-11-20 11:40:27.233568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.480 [2024-11-20 11:40:27.234615] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:21.740 [2024-11-20 11:40:27.239359] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 406.824 ms, result 0 00:26:21.740 [2024-11-20 11:40:27.240718] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:21.740 Some configs were skipped because the RPC state that can call them passed over. 00:26:21.740 11:40:27 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:26:21.740 [2024-11-20 11:40:27.469675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.740 [2024-11-20 11:40:27.469744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:26:21.740 [2024-11-20 11:40:27.469762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.261 ms 00:26:21.740 [2024-11-20 11:40:27.469777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.740 [2024-11-20 11:40:27.469816] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.408 ms, result 0 00:26:21.740 true 00:26:21.740 11:40:27 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:26:21.999 [2024-11-20 11:40:27.737909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.999 [2024-11-20 11:40:27.738151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:26:21.999 [2024-11-20 11:40:27.738189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.190 ms 00:26:21.999 [2024-11-20 11:40:27.738204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.999 [2024-11-20 11:40:27.738293] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.555 ms, result 0 00:26:21.999 true 00:26:21.999 11:40:27 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 79504 00:26:21.999 11:40:27 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 79504 ']' 00:26:21.999 11:40:27 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79504 00:26:21.999 11:40:27 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:26:22.258 11:40:27 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:22.258 11:40:27 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79504 00:26:22.258 killing process with pid 79504 00:26:22.258 11:40:27 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:22.258 11:40:27 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:22.258 11:40:27 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79504' 00:26:22.258 11:40:27 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 79504 00:26:22.258 11:40:27 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 79504 00:26:23.196 [2024-11-20 11:40:28.944983] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.196 [2024-11-20 11:40:28.945052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:23.196 [2024-11-20 11:40:28.945068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:23.196 [2024-11-20 11:40:28.945081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.196 [2024-11-20 11:40:28.945104] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:26:23.196 [2024-11-20 11:40:28.949718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.196 [2024-11-20 11:40:28.949760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:23.196 [2024-11-20 11:40:28.949782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.592 ms 00:26:23.196 [2024-11-20 11:40:28.949794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.196 [2024-11-20 11:40:28.950106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.196 [2024-11-20 11:40:28.950127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:23.196 [2024-11-20 11:40:28.950144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:26:23.196 [2024-11-20 11:40:28.950156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.196 [2024-11-20 11:40:28.953519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.196 [2024-11-20 11:40:28.953553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:23.196 [2024-11-20 11:40:28.953571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.337 ms 00:26:23.196 [2024-11-20 11:40:28.953582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.456 [2024-11-20 11:40:28.959356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.456 [2024-11-20 11:40:28.959389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:23.456 [2024-11-20 11:40:28.959405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.733 ms 00:26:23.456 [2024-11-20 11:40:28.959415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.456 [2024-11-20 11:40:28.974790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.456 [2024-11-20 11:40:28.974836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:23.456 [2024-11-20 11:40:28.974857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.316 ms 00:26:23.456 [2024-11-20 11:40:28.974876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.456 [2024-11-20 11:40:28.985048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.456 [2024-11-20 11:40:28.985086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:23.456 [2024-11-20 11:40:28.985105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.099 ms 00:26:23.456 [2024-11-20 11:40:28.985116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.456 [2024-11-20 11:40:28.985271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.456 [2024-11-20 11:40:28.985285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:23.456 [2024-11-20 11:40:28.985299] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:26:23.456 [2024-11-20 11:40:28.985310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.456 [2024-11-20 11:40:29.001321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.456 [2024-11-20 11:40:29.001356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:23.456 [2024-11-20 11:40:29.001372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.986 ms 00:26:23.456 [2024-11-20 11:40:29.001383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.456 [2024-11-20 11:40:29.016581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.456 [2024-11-20 11:40:29.016613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:23.456 [2024-11-20 11:40:29.016648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.144 ms 00:26:23.456 [2024-11-20 11:40:29.016658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.456 [2024-11-20 11:40:29.031336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.456 [2024-11-20 11:40:29.031503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:23.456 [2024-11-20 11:40:29.031531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.627 ms 00:26:23.456 [2024-11-20 11:40:29.031541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.456 [2024-11-20 11:40:29.046305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.457 [2024-11-20 11:40:29.046337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:23.457 [2024-11-20 11:40:29.046353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.681 ms 00:26:23.457 [2024-11-20 11:40:29.046378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.457 [2024-11-20 11:40:29.046437] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:23.457 [2024-11-20 11:40:29.046454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.046469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.046605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.046675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.046725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.046743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.046754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.046768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.046779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.046793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 
11:40:29.046804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.046817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.046827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.046840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.046851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.046864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.046874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.046887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.046898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.046913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.046923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.046938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.046949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.046962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.046973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.046985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.046996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:26:23.457 [2024-11-20 11:40:29.047115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:23.457 [2024-11-20 11:40:29.047774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:23.458 [2024-11-20 11:40:29.047784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:23.458 [2024-11-20 11:40:29.047797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:23.458 [2024-11-20 11:40:29.047808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:23.458 [2024-11-20 11:40:29.047821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:23.458 [2024-11-20 11:40:29.047832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:23.458 [2024-11-20 11:40:29.047848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:23.458 [2024-11-20 11:40:29.047860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:23.458 [2024-11-20 11:40:29.047877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:23.458 [2024-11-20 11:40:29.047888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:23.458 [2024-11-20 11:40:29.047903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:23.458 [2024-11-20 11:40:29.047915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:23.458 [2024-11-20 11:40:29.047930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:23.458 [2024-11-20 11:40:29.047941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:23.458 [2024-11-20 11:40:29.047958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:23.458 [2024-11-20 11:40:29.047976] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:23.458 [2024-11-20 11:40:29.048002] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 963441e8-3606-45d4-aa8c-a0a9c7a666bb 00:26:23.458 [2024-11-20 11:40:29.048027] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:23.458 [2024-11-20 11:40:29.048048] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:23.458 [2024-11-20 11:40:29.048058] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:23.458 [2024-11-20 11:40:29.048074] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:23.458 [2024-11-20 11:40:29.048084] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:23.458 [2024-11-20 11:40:29.048100] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:23.458 [2024-11-20 11:40:29.048110] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:23.458 [2024-11-20 11:40:29.048124] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:23.458 [2024-11-20 11:40:29.048134] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:23.458 [2024-11-20 11:40:29.048150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:23.458 [2024-11-20 11:40:29.048161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:23.458 [2024-11-20 11:40:29.048177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.712 ms 00:26:23.458 [2024-11-20 11:40:29.048188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.458 [2024-11-20 11:40:29.068855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.458 [2024-11-20 11:40:29.068999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:23.458 [2024-11-20 11:40:29.069139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.624 ms 00:26:23.458 [2024-11-20 11:40:29.069249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.458 [2024-11-20 11:40:29.069915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:23.458 [2024-11-20 11:40:29.069967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:23.458 [2024-11-20 11:40:29.070066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.544 ms 00:26:23.458 [2024-11-20 11:40:29.070141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.458 [2024-11-20 11:40:29.141285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:23.458 [2024-11-20 11:40:29.141453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:23.458 [2024-11-20 11:40:29.141627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:23.458 [2024-11-20 11:40:29.141667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.458 [2024-11-20 11:40:29.141807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:23.458 [2024-11-20 11:40:29.141929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:23.458 [2024-11-20 11:40:29.141983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:23.458 [2024-11-20 11:40:29.142018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.458 [2024-11-20 11:40:29.142098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:23.458 [2024-11-20 11:40:29.142135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:23.458 [2024-11-20 11:40:29.142172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:23.458 [2024-11-20 11:40:29.142351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.458 [2024-11-20 11:40:29.142436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:23.458 [2024-11-20 11:40:29.142488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:23.458 [2024-11-20 11:40:29.142637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:23.458 [2024-11-20 11:40:29.142673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.722 [2024-11-20 11:40:29.276698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:23.722 [2024-11-20 11:40:29.276866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:23.722 [2024-11-20 11:40:29.277030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:23.722 [2024-11-20 11:40:29.277071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.722 [2024-11-20 
11:40:29.387747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:23.722 [2024-11-20 11:40:29.387918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:23.722 [2024-11-20 11:40:29.388046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:23.722 [2024-11-20 11:40:29.388093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.722 [2024-11-20 11:40:29.388276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:23.722 [2024-11-20 11:40:29.388386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:23.722 [2024-11-20 11:40:29.388486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:23.722 [2024-11-20 11:40:29.388527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.722 [2024-11-20 11:40:29.388594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:23.722 [2024-11-20 11:40:29.388784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:23.722 [2024-11-20 11:40:29.388830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:23.722 [2024-11-20 11:40:29.388863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.722 [2024-11-20 11:40:29.389040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:23.722 [2024-11-20 11:40:29.389081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:23.722 [2024-11-20 11:40:29.389119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:23.722 [2024-11-20 11:40:29.389183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.722 [2024-11-20 11:40:29.389316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:23.722 [2024-11-20 11:40:29.389411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:23.722 [2024-11-20 11:40:29.389532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:23.722 [2024-11-20 11:40:29.389575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.722 [2024-11-20 11:40:29.389655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:23.722 [2024-11-20 11:40:29.389824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:23.722 [2024-11-20 11:40:29.389871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:23.722 [2024-11-20 11:40:29.389905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.722 [2024-11-20 11:40:29.389983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:23.722 [2024-11-20 11:40:29.390177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:23.722 [2024-11-20 11:40:29.390221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:23.722 [2024-11-20 11:40:29.390266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:23.722 [2024-11-20 11:40:29.390439] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 445.425 ms, result 0 00:26:25.130 11:40:30 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:25.130 [2024-11-20 11:40:30.637250] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:26:25.130 [2024-11-20 11:40:30.637660] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79572 ] 00:26:25.130 [2024-11-20 11:40:30.830014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:25.389 [2024-11-20 11:40:30.957396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:25.647 [2024-11-20 11:40:31.337367] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:25.647 [2024-11-20 11:40:31.337657] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:25.907 [2024-11-20 11:40:31.501197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.907 [2024-11-20 11:40:31.501412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:25.907 [2024-11-20 11:40:31.501439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:25.907 [2024-11-20 11:40:31.501453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.907 [2024-11-20 11:40:31.504838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.907 [2024-11-20 11:40:31.504877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:25.907 [2024-11-20 11:40:31.504890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.335 ms 00:26:25.907 [2024-11-20 11:40:31.504900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.907 [2024-11-20 11:40:31.505042] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:25.907 [2024-11-20 11:40:31.506116] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:25.907 [2024-11-20 11:40:31.506151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.907 [2024-11-20 11:40:31.506162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:25.907 [2024-11-20 11:40:31.506173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.116 ms 00:26:25.907 [2024-11-20 11:40:31.506184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.907 [2024-11-20 11:40:31.507647] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:25.907 [2024-11-20 11:40:31.526551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.907 [2024-11-20 11:40:31.526592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:25.907 [2024-11-20 11:40:31.526609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.904 ms 00:26:25.907 [2024-11-20 11:40:31.526620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.907 [2024-11-20 11:40:31.526721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.907 [2024-11-20 11:40:31.526737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:25.907 [2024-11-20 11:40:31.526749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:26:25.907 [2024-11-20 
11:40:31.526759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.907 [2024-11-20 11:40:31.533360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.907 [2024-11-20 11:40:31.533393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:25.907 [2024-11-20 11:40:31.533408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.552 ms 00:26:25.907 [2024-11-20 11:40:31.533421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.907 [2024-11-20 11:40:31.533553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.907 [2024-11-20 11:40:31.533571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:25.907 [2024-11-20 11:40:31.533586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:26:25.907 [2024-11-20 11:40:31.533599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.907 [2024-11-20 11:40:31.533634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.907 [2024-11-20 11:40:31.533651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:25.907 [2024-11-20 11:40:31.533664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:25.907 [2024-11-20 11:40:31.533677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.907 [2024-11-20 11:40:31.533706] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:26:25.907 [2024-11-20 11:40:31.538744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.907 [2024-11-20 11:40:31.538777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:25.907 [2024-11-20 11:40:31.538790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.046 ms 00:26:25.907 [2024-11-20 11:40:31.538800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.907 [2024-11-20 11:40:31.538868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.907 [2024-11-20 11:40:31.538881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:25.907 [2024-11-20 11:40:31.538893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:25.907 [2024-11-20 11:40:31.538905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.907 [2024-11-20 11:40:31.538934] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:25.907 [2024-11-20 11:40:31.538960] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:25.907 [2024-11-20 11:40:31.538996] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:25.907 [2024-11-20 11:40:31.539014] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:25.907 [2024-11-20 11:40:31.539109] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:25.907 [2024-11-20 11:40:31.539126] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:25.907 [2024-11-20 11:40:31.539142] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:26:25.907 [2024-11-20 11:40:31.539159] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:25.907 [2024-11-20 11:40:31.539178] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:25.907 [2024-11-20 11:40:31.539194] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:26:25.907 [2024-11-20 11:40:31.539206] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:25.907 [2024-11-20 11:40:31.539216] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:25.907 [2024-11-20 11:40:31.539226] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:25.907 [2024-11-20 11:40:31.539237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.907 [2024-11-20 11:40:31.539247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:25.907 [2024-11-20 11:40:31.539258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:26:25.907 [2024-11-20 11:40:31.539267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.907 [2024-11-20 11:40:31.539345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.907 [2024-11-20 11:40:31.539358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:25.907 [2024-11-20 11:40:31.539376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:26:25.907 [2024-11-20 11:40:31.539386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.907 [2024-11-20 11:40:31.539498] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:25.907 [2024-11-20 11:40:31.539512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:25.907 [2024-11-20 11:40:31.539523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:25.907 [2024-11-20 11:40:31.539538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:25.907 [2024-11-20 11:40:31.539549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:25.907 [2024-11-20 11:40:31.539558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:25.907 [2024-11-20 11:40:31.539568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:26:25.907 [2024-11-20 11:40:31.539579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:25.907 [2024-11-20 11:40:31.539588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:26:25.907 [2024-11-20 11:40:31.539598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:25.907 [2024-11-20 11:40:31.539608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:25.907 [2024-11-20 11:40:31.539619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:26:25.907 [2024-11-20 11:40:31.539628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:25.907 [2024-11-20 11:40:31.539653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:25.907 [2024-11-20 11:40:31.539663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:26:25.907 [2024-11-20 11:40:31.539672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:25.907 [2024-11-20 11:40:31.539682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:26:25.907 [2024-11-20 11:40:31.539691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:26:25.907 [2024-11-20 11:40:31.539701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:25.907 [2024-11-20 11:40:31.539710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:25.907 [2024-11-20 11:40:31.539720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:26:25.907 [2024-11-20 11:40:31.539729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:25.907 [2024-11-20 11:40:31.539739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:25.907 [2024-11-20 11:40:31.539748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:26:25.907 [2024-11-20 11:40:31.539757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:25.907 [2024-11-20 11:40:31.539767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:25.907 [2024-11-20 11:40:31.539776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:26:25.907 [2024-11-20 11:40:31.539785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:25.907 [2024-11-20 11:40:31.539795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:25.907 [2024-11-20 11:40:31.539804] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:26:25.907 [2024-11-20 11:40:31.539813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:25.908 [2024-11-20 11:40:31.539827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:25.908 [2024-11-20 11:40:31.539839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:26:25.908 [2024-11-20 11:40:31.539851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:25.908 [2024-11-20 11:40:31.539864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:25.908 [2024-11-20 11:40:31.539876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:26:25.908 [2024-11-20 11:40:31.539888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:25.908 [2024-11-20 11:40:31.539900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:25.908 [2024-11-20 11:40:31.539913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:26:25.908 [2024-11-20 11:40:31.539925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:25.908 [2024-11-20 11:40:31.539937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:25.908 [2024-11-20 11:40:31.539949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:26:25.908 [2024-11-20 11:40:31.539962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:25.908 [2024-11-20 11:40:31.539976] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:25.908 [2024-11-20 11:40:31.539989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:25.908 [2024-11-20 11:40:31.540002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:25.908 [2024-11-20 11:40:31.540019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:25.908 [2024-11-20 11:40:31.540032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:25.908 [2024-11-20 11:40:31.540045] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:25.908 [2024-11-20 11:40:31.540057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:25.908 [2024-11-20 11:40:31.540070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:25.908 [2024-11-20 11:40:31.540082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:25.908 [2024-11-20 11:40:31.540094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:25.908 [2024-11-20 11:40:31.540108] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:25.908 [2024-11-20 11:40:31.540124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:25.908 [2024-11-20 11:40:31.540138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:26:25.908 [2024-11-20 11:40:31.540153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:26:25.908 [2024-11-20 11:40:31.540167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:26:25.908 [2024-11-20 11:40:31.540181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:26:25.908 [2024-11-20 11:40:31.540195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:26:25.908 [2024-11-20 11:40:31.540208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:26:25.908 [2024-11-20 11:40:31.540223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:26:25.908 [2024-11-20 11:40:31.540237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:26:25.908 [2024-11-20 11:40:31.540250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:26:25.908 [2024-11-20 11:40:31.540264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:26:25.908 [2024-11-20 11:40:31.540277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:26:25.908 [2024-11-20 11:40:31.540291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:26:25.908 [2024-11-20 11:40:31.540304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:26:25.908 [2024-11-20 11:40:31.540318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:26:25.908 [2024-11-20 11:40:31.540332] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:25.908 [2024-11-20 11:40:31.540346] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:25.908 [2024-11-20 11:40:31.540362] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:25.908 [2024-11-20 11:40:31.540376] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:25.908 [2024-11-20 11:40:31.540389] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:25.908 [2024-11-20 11:40:31.540403] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:25.908 [2024-11-20 11:40:31.540418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.908 [2024-11-20 11:40:31.540431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:25.908 [2024-11-20 11:40:31.540448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.992 ms 00:26:25.908 [2024-11-20 11:40:31.540461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.908 [2024-11-20 11:40:31.582329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.908 [2024-11-20 11:40:31.582383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:25.908 [2024-11-20 11:40:31.582400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.793 ms 00:26:25.908 [2024-11-20 11:40:31.582413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.908 [2024-11-20 11:40:31.582598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.908 [2024-11-20 11:40:31.582619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:25.908 [2024-11-20 11:40:31.582632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:26:25.908 [2024-11-20 11:40:31.582644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.908 [2024-11-20 11:40:31.641400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.908 [2024-11-20 11:40:31.641447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:25.908 [2024-11-20 11:40:31.641462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.725 ms 00:26:25.908 [2024-11-20 11:40:31.641493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.908 [2024-11-20 11:40:31.641641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.908 [2024-11-20 11:40:31.641655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:25.908 [2024-11-20 11:40:31.641667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:25.908 [2024-11-20 11:40:31.641678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.908 [2024-11-20 11:40:31.642135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.908 [2024-11-20 11:40:31.642156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:25.908 [2024-11-20 11:40:31.642169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.432 ms 00:26:25.908 [2024-11-20 11:40:31.642198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.908 [2024-11-20 11:40:31.642320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:26:25.908 [2024-11-20 11:40:31.642334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:25.908 [2024-11-20 11:40:31.642345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:26:25.908 [2024-11-20 11:40:31.642355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.908 [2024-11-20 11:40:31.662822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:25.908 [2024-11-20 11:40:31.662869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:25.908 [2024-11-20 11:40:31.662887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.442 ms 00:26:25.908 [2024-11-20 11:40:31.662900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.166 [2024-11-20 11:40:31.684751] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:26.166 [2024-11-20 11:40:31.684795] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:26.166 [2024-11-20 11:40:31.684815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.166 [2024-11-20 11:40:31.684828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:26.166 [2024-11-20 11:40:31.684843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.752 ms 00:26:26.166 [2024-11-20 11:40:31.684856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.166 [2024-11-20 11:40:31.716571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.166 [2024-11-20 11:40:31.716754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:26.166 [2024-11-20 11:40:31.716779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.609 ms 00:26:26.166 [2024-11-20 11:40:31.716790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.166 [2024-11-20 11:40:31.736631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.166 [2024-11-20 11:40:31.736674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:26.166 [2024-11-20 11:40:31.736689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.712 ms 00:26:26.166 [2024-11-20 11:40:31.736700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.166 [2024-11-20 11:40:31.755908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.166 [2024-11-20 11:40:31.756051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:26.166 [2024-11-20 11:40:31.756078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.118 ms 00:26:26.166 [2024-11-20 11:40:31.756095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.166 [2024-11-20 11:40:31.756991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.166 [2024-11-20 11:40:31.757021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:26.166 [2024-11-20 11:40:31.757036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.719 ms 00:26:26.166 [2024-11-20 11:40:31.757049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.166 [2024-11-20 11:40:31.843534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.166 [2024-11-20 
11:40:31.843601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:26.166 [2024-11-20 11:40:31.843618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.450 ms 00:26:26.167 [2024-11-20 11:40:31.843630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.167 [2024-11-20 11:40:31.856033] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:26:26.167 [2024-11-20 11:40:31.872706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.167 [2024-11-20 11:40:31.872926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:26.167 [2024-11-20 11:40:31.872957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.941 ms 00:26:26.167 [2024-11-20 11:40:31.872971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.167 [2024-11-20 11:40:31.873139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.167 [2024-11-20 11:40:31.873174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:26.167 [2024-11-20 11:40:31.873189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:26.167 [2024-11-20 11:40:31.873204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.167 [2024-11-20 11:40:31.873267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.167 [2024-11-20 11:40:31.873283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:26.167 [2024-11-20 11:40:31.873298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:26:26.167 [2024-11-20 11:40:31.873312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.167 [2024-11-20 11:40:31.873349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.167 [2024-11-20 11:40:31.873368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:26.167 [2024-11-20 11:40:31.873382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:26.167 [2024-11-20 11:40:31.873397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.167 [2024-11-20 11:40:31.873436] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:26.167 [2024-11-20 11:40:31.873452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.167 [2024-11-20 11:40:31.873466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:26.167 [2024-11-20 11:40:31.873481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:26:26.167 [2024-11-20 11:40:31.873521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.167 [2024-11-20 11:40:31.910900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.167 [2024-11-20 11:40:31.910947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:26.167 [2024-11-20 11:40:31.910963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.344 ms 00:26:26.167 [2024-11-20 11:40:31.910974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.167 [2024-11-20 11:40:31.911099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:26.167 [2024-11-20 11:40:31.911114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:26.167 [2024-11-20 
11:40:31.911125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:26:26.167 [2024-11-20 11:40:31.911135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:26.167 [2024-11-20 11:40:31.912138] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:26.167 [2024-11-20 11:40:31.916732] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 410.636 ms, result 0 00:26:26.167 [2024-11-20 11:40:31.917614] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:26.425 [2024-11-20 11:40:31.935688] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:27.412  [2024-11-20T11:40:34.110Z] Copying: 35/256 [MB] (35 MBps) [2024-11-20T11:40:35.045Z] Copying: 68/256 [MB] (32 MBps) [2024-11-20T11:40:36.423Z] Copying: 97/256 [MB] (29 MBps) [2024-11-20T11:40:37.358Z] Copying: 127/256 [MB] (29 MBps) [2024-11-20T11:40:38.311Z] Copying: 158/256 [MB] (31 MBps) [2024-11-20T11:40:39.245Z] Copying: 190/256 [MB] (32 MBps) [2024-11-20T11:40:40.180Z] Copying: 220/256 [MB] (30 MBps) [2024-11-20T11:40:40.180Z] Copying: 253/256 [MB] (32 MBps) [2024-11-20T11:40:40.180Z] Copying: 256/256 [MB] (average 31 MBps)[2024-11-20 11:40:40.125281] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:34.418 [2024-11-20 11:40:40.143303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.418 [2024-11-20 11:40:40.143511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:34.418 [2024-11-20 11:40:40.143616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:34.418 [2024-11-20 11:40:40.143668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.418 [2024-11-20 11:40:40.143845] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:26:34.418 [2024-11-20 11:40:40.148892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.418 [2024-11-20 11:40:40.149046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:34.418 [2024-11-20 11:40:40.149175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.884 ms 00:26:34.418 [2024-11-20 11:40:40.149221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.418 [2024-11-20 11:40:40.149614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.418 [2024-11-20 11:40:40.149739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:34.418 [2024-11-20 11:40:40.149829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:26:34.418 [2024-11-20 11:40:40.149870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.418 [2024-11-20 11:40:40.154149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.418 [2024-11-20 11:40:40.154294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:34.418 [2024-11-20 11:40:40.154372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.109 ms 00:26:34.418 [2024-11-20 11:40:40.154410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.418 [2024-11-20 11:40:40.161049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:26:34.418 [2024-11-20 11:40:40.161193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:34.418 [2024-11-20 11:40:40.161336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.554 ms 00:26:34.418 [2024-11-20 11:40:40.161378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.678 [2024-11-20 11:40:40.203582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.678 [2024-11-20 11:40:40.203785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:34.678 [2024-11-20 11:40:40.203979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.051 ms 00:26:34.678 [2024-11-20 11:40:40.204021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.678 [2024-11-20 11:40:40.227459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.678 [2024-11-20 11:40:40.227679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:34.678 [2024-11-20 11:40:40.227770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.332 ms 00:26:34.678 [2024-11-20 11:40:40.227821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.678 [2024-11-20 11:40:40.228056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.678 [2024-11-20 11:40:40.228216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:34.678 [2024-11-20 11:40:40.228260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:26:34.678 [2024-11-20 11:40:40.228294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.678 [2024-11-20 11:40:40.269300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.678 [2024-11-20 11:40:40.269498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:34.678 [2024-11-20 11:40:40.269649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.942 ms 00:26:34.678 [2024-11-20 11:40:40.269668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.678 [2024-11-20 11:40:40.312069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.678 [2024-11-20 11:40:40.312268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:34.678 [2024-11-20 11:40:40.312365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.322 ms 00:26:34.678 [2024-11-20 11:40:40.312407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.678 [2024-11-20 11:40:40.353698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.678 [2024-11-20 11:40:40.353914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:34.678 [2024-11-20 11:40:40.354055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.175 ms 00:26:34.678 [2024-11-20 11:40:40.354100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.678 [2024-11-20 11:40:40.394434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.678 [2024-11-20 11:40:40.394635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:34.678 [2024-11-20 11:40:40.394783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.182 ms 00:26:34.678 [2024-11-20 11:40:40.394813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.678 [2024-11-20 
11:40:40.394884] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:34.678 [2024-11-20 11:40:40.394903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:34.678 [2024-11-20 11:40:40.394932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:34.678 [2024-11-20 11:40:40.394945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:34.678 [2024-11-20 11:40:40.394957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:34.678 [2024-11-20 11:40:40.394969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:34.678 [2024-11-20 11:40:40.394981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:34.678 [2024-11-20 11:40:40.394992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:34.678 [2024-11-20 11:40:40.395004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:34.678 [2024-11-20 11:40:40.395016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:34.678 [2024-11-20 11:40:40.395029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:34.678 [2024-11-20 11:40:40.395041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:34.678 [2024-11-20 11:40:40.395052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 
11:40:40.395239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 
00:26:34.679 [2024-11-20 11:40:40.395608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 
wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.395982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.396006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.396018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.396031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.396043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.396055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.396067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.396079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.396092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.396105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.396117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.396130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.396142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.396154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.396168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.396181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.396193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.396221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.396233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.396246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.396258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.396271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:34.679 [2024-11-20 11:40:40.396293] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:34.679 [2024-11-20 11:40:40.396306] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 963441e8-3606-45d4-aa8c-a0a9c7a666bb 00:26:34.680 [2024-11-20 11:40:40.396319] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:34.680 [2024-11-20 11:40:40.396330] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:34.680 [2024-11-20 11:40:40.396342] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:34.680 [2024-11-20 11:40:40.396365] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:34.680 [2024-11-20 11:40:40.396375] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:34.680 [2024-11-20 11:40:40.396387] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:34.680 [2024-11-20 11:40:40.396398] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:34.680 [2024-11-20 11:40:40.396408] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:34.680 [2024-11-20 11:40:40.396419] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:34.680 [2024-11-20 11:40:40.396430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.680 [2024-11-20 11:40:40.396446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:34.680 [2024-11-20 11:40:40.396458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.547 ms 00:26:34.680 [2024-11-20 11:40:40.396469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.680 [2024-11-20 11:40:40.418931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.680 [2024-11-20 11:40:40.418974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:34.680 [2024-11-20 11:40:40.418990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.434 ms 00:26:34.680 [2024-11-20 11:40:40.419001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.680 [2024-11-20 11:40:40.419599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.680 [2024-11-20 11:40:40.419617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:34.680 [2024-11-20 11:40:40.419629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.544 ms 00:26:34.680 [2024-11-20 11:40:40.419641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.939 [2024-11-20 11:40:40.483375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:34.939 [2024-11-20 11:40:40.483434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:34.939 [2024-11-20 11:40:40.483450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:34.939 [2024-11-20 11:40:40.483463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.939 [2024-11-20 11:40:40.483619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:34.939 [2024-11-20 11:40:40.483634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:34.939 [2024-11-20 11:40:40.483646] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:34.939 [2024-11-20 11:40:40.483657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.939 [2024-11-20 11:40:40.483717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:34.939 [2024-11-20 11:40:40.483732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:34.939 [2024-11-20 11:40:40.483748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:34.939 [2024-11-20 11:40:40.483760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.940 [2024-11-20 11:40:40.483781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:34.940 [2024-11-20 11:40:40.483796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:34.940 [2024-11-20 11:40:40.483807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:34.940 [2024-11-20 11:40:40.483818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.940 [2024-11-20 11:40:40.624146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:34.940 [2024-11-20 11:40:40.624208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:34.940 [2024-11-20 11:40:40.624223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:34.940 [2024-11-20 11:40:40.624251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.199 [2024-11-20 11:40:40.741281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:35.199 [2024-11-20 11:40:40.741365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:35.199 [2024-11-20 11:40:40.741384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:35.199 [2024-11-20 11:40:40.741397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.199 [2024-11-20 11:40:40.741521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:35.199 [2024-11-20 11:40:40.741537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:35.199 [2024-11-20 11:40:40.741564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:35.199 [2024-11-20 11:40:40.741577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.199 [2024-11-20 11:40:40.741612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:35.199 [2024-11-20 11:40:40.741625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:35.199 [2024-11-20 11:40:40.741643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:35.199 [2024-11-20 11:40:40.741655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.199 [2024-11-20 11:40:40.741788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:35.199 [2024-11-20 11:40:40.741804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:35.199 [2024-11-20 11:40:40.741817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:35.199 [2024-11-20 11:40:40.741830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.199 [2024-11-20 11:40:40.741877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:35.199 [2024-11-20 11:40:40.741892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize superblock 00:26:35.199 [2024-11-20 11:40:40.741904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:35.199 [2024-11-20 11:40:40.741921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.199 [2024-11-20 11:40:40.741967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:35.199 [2024-11-20 11:40:40.741980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:35.199 [2024-11-20 11:40:40.741993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:35.199 [2024-11-20 11:40:40.742005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.199 [2024-11-20 11:40:40.742055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:35.199 [2024-11-20 11:40:40.742075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:35.199 [2024-11-20 11:40:40.742092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:35.199 [2024-11-20 11:40:40.742104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:35.199 [2024-11-20 11:40:40.742258] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 598.953 ms, result 0 00:26:36.135 00:26:36.135 00:26:36.407 11:40:41 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:36.696 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:26:36.696 11:40:42 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:26:36.696 11:40:42 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:26:36.696 11:40:42 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:36.696 11:40:42 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:36.696 11:40:42 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:26:36.954 11:40:42 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:26:36.954 Process with pid 79504 is not found 00:26:36.954 11:40:42 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 79504 00:26:36.954 11:40:42 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 79504 ']' 00:26:36.954 11:40:42 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79504 00:26:36.954 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79504) - No such process 00:26:36.955 11:40:42 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 79504 is not found' 00:26:36.955 ************************************ 00:26:36.955 END TEST ftl_trim 00:26:36.955 ************************************ 00:26:36.955 00:26:36.955 real 1m7.995s 00:26:36.955 user 1m37.978s 00:26:36.955 sys 0m7.297s 00:26:36.955 11:40:42 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:36.955 11:40:42 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:26:36.955 11:40:42 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:26:36.955 11:40:42 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:26:36.955 11:40:42 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:36.955 11:40:42 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:36.955 ************************************ 00:26:36.955 START TEST ftl_restore 00:26:36.955 
************************************ 00:26:36.955 11:40:42 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:26:36.955 * Looking for test storage... 00:26:36.955 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:36.955 11:40:42 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:36.955 11:40:42 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:26:36.955 11:40:42 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:37.213 11:40:42 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:37.213 11:40:42 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:37.213 11:40:42 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:37.213 11:40:42 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:37.213 11:40:42 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:26:37.213 11:40:42 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:26:37.213 11:40:42 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:26:37.213 11:40:42 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:26:37.213 11:40:42 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:26:37.213 11:40:42 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:26:37.213 11:40:42 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:26:37.213 11:40:42 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:37.213 11:40:42 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:26:37.213 11:40:42 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:26:37.213 11:40:42 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:37.213 11:40:42 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:37.213 11:40:42 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:26:37.213 11:40:42 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:26:37.213 11:40:42 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:37.213 11:40:42 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:26:37.213 11:40:42 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:26:37.213 11:40:42 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:26:37.213 11:40:42 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:26:37.213 11:40:42 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:37.213 11:40:42 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:26:37.213 11:40:42 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:26:37.213 11:40:42 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:37.213 11:40:42 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:37.213 11:40:42 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:26:37.213 11:40:42 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:37.213 11:40:42 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:37.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:37.213 --rc genhtml_branch_coverage=1 00:26:37.213 --rc genhtml_function_coverage=1 00:26:37.213 --rc genhtml_legend=1 00:26:37.213 --rc geninfo_all_blocks=1 00:26:37.213 --rc geninfo_unexecuted_blocks=1 00:26:37.213 00:26:37.213 ' 00:26:37.213 11:40:42 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:37.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:37.213 --rc genhtml_branch_coverage=1 00:26:37.213 --rc genhtml_function_coverage=1 00:26:37.213 --rc genhtml_legend=1 00:26:37.213 --rc geninfo_all_blocks=1 00:26:37.213 --rc geninfo_unexecuted_blocks=1 00:26:37.213 00:26:37.213 ' 00:26:37.213 11:40:42 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:37.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:37.213 --rc genhtml_branch_coverage=1 00:26:37.213 --rc genhtml_function_coverage=1 00:26:37.213 --rc genhtml_legend=1 00:26:37.213 --rc geninfo_all_blocks=1 00:26:37.213 --rc geninfo_unexecuted_blocks=1 00:26:37.213 00:26:37.213 ' 00:26:37.213 11:40:42 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:37.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:37.213 --rc genhtml_branch_coverage=1 00:26:37.213 --rc genhtml_function_coverage=1 00:26:37.213 --rc genhtml_legend=1 00:26:37.213 --rc geninfo_all_blocks=1 00:26:37.213 --rc geninfo_unexecuted_blocks=1 00:26:37.213 00:26:37.213 ' 00:26:37.213 11:40:42 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:37.213 11:40:42 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:26:37.213 11:40:42 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:37.213 11:40:42 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:37.213 11:40:42 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
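The `lt 1.15 2` call traced above is how the harness decides which form of the lcov `--rc` coverage options to export: `lcov --version | awk '{print $NF}'` yields 1.15 here, and `cmp_versions` in scripts/common.sh splits both version strings on `.`, `-` and `:` into arrays and compares them field by field until one side wins. A minimal standalone sketch of that comparison — an illustrative rewrite under the same splitting rules, not the in-tree helper itself:

ver_lt() {
    # Split dotted versions on . - : into fields (mirrors the IFS=.-: / read -ra trace above).
    local IFS='.-:'
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        # Missing fields count as 0, so "1.15" vs "2" compares as 1.15 vs 2.0; 10# forces base 10.
        (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0
        (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}

ver_lt 1.15 2 && echo "lcov older than 2.x"   # 1 < 2 in the first field, so this prints

Because 1.15 sorts below 2, the trace that follows takes the `lt` branch and exports the LCOV_OPTS/LCOV strings shown next.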
00:26:37.213 11:40:42 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:37.213 11:40:42 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:37.213 11:40:42 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:37.213 11:40:42 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:37.213 11:40:42 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:37.213 11:40:42 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:37.213 11:40:42 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:37.213 11:40:42 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:37.213 11:40:42 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:37.213 11:40:42 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:37.213 11:40:42 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:37.213 11:40:42 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:37.213 11:40:42 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:37.213 11:40:42 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:37.213 11:40:42 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:37.213 11:40:42 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:37.213 11:40:42 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:37.213 11:40:42 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:37.213 11:40:42 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:37.213 11:40:42 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:37.213 11:40:42 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:37.213 11:40:42 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:37.213 11:40:42 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:37.213 11:40:42 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:37.213 11:40:42 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:37.213 11:40:42 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:26:37.213 11:40:42 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.iC4rU8kRnW 00:26:37.213 11:40:42 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:26:37.213 11:40:42 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:26:37.213 11:40:42 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:26:37.213 11:40:42 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:26:37.213 11:40:42 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:26:37.213 11:40:42 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:26:37.213 11:40:42 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:26:37.213 11:40:42 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:26:37.213 
11:40:42 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=79757 00:26:37.213 11:40:42 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 79757 00:26:37.213 11:40:42 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:37.213 11:40:42 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 79757 ']' 00:26:37.213 11:40:42 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:37.213 11:40:42 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:37.214 11:40:42 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:37.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:37.214 11:40:42 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:37.214 11:40:42 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:26:37.214 [2024-11-20 11:40:42.933410] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:26:37.214 [2024-11-20 11:40:42.933754] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79757 ] 00:26:37.471 [2024-11-20 11:40:43.116672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:37.729 [2024-11-20 11:40:43.304243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:38.665 11:40:44 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:38.665 11:40:44 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:26:38.665 11:40:44 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:26:38.665 11:40:44 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:26:38.665 11:40:44 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:38.665 11:40:44 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:26:38.665 11:40:44 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:26:38.665 11:40:44 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:26:38.924 11:40:44 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:26:38.924 11:40:44 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:26:38.924 11:40:44 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:26:38.924 11:40:44 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:26:38.924 11:40:44 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:38.924 11:40:44 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:26:38.924 11:40:44 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:26:38.924 11:40:44 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:26:39.183 11:40:44 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:39.183 { 00:26:39.183 "name": "nvme0n1", 00:26:39.183 "aliases": [ 00:26:39.183 "763a2069-22b2-4f11-bc14-ea03167b4a16" 00:26:39.183 ], 00:26:39.183 "product_name": "NVMe disk", 00:26:39.183 "block_size": 4096, 00:26:39.183 "num_blocks": 1310720, 00:26:39.183 "uuid": 
"763a2069-22b2-4f11-bc14-ea03167b4a16", 00:26:39.183 "numa_id": -1, 00:26:39.183 "assigned_rate_limits": { 00:26:39.183 "rw_ios_per_sec": 0, 00:26:39.183 "rw_mbytes_per_sec": 0, 00:26:39.183 "r_mbytes_per_sec": 0, 00:26:39.183 "w_mbytes_per_sec": 0 00:26:39.183 }, 00:26:39.183 "claimed": true, 00:26:39.183 "claim_type": "read_many_write_one", 00:26:39.183 "zoned": false, 00:26:39.183 "supported_io_types": { 00:26:39.183 "read": true, 00:26:39.183 "write": true, 00:26:39.183 "unmap": true, 00:26:39.183 "flush": true, 00:26:39.183 "reset": true, 00:26:39.183 "nvme_admin": true, 00:26:39.183 "nvme_io": true, 00:26:39.183 "nvme_io_md": false, 00:26:39.183 "write_zeroes": true, 00:26:39.183 "zcopy": false, 00:26:39.183 "get_zone_info": false, 00:26:39.183 "zone_management": false, 00:26:39.183 "zone_append": false, 00:26:39.183 "compare": true, 00:26:39.183 "compare_and_write": false, 00:26:39.183 "abort": true, 00:26:39.183 "seek_hole": false, 00:26:39.183 "seek_data": false, 00:26:39.183 "copy": true, 00:26:39.183 "nvme_iov_md": false 00:26:39.183 }, 00:26:39.183 "driver_specific": { 00:26:39.183 "nvme": [ 00:26:39.183 { 00:26:39.183 "pci_address": "0000:00:11.0", 00:26:39.183 "trid": { 00:26:39.183 "trtype": "PCIe", 00:26:39.183 "traddr": "0000:00:11.0" 00:26:39.183 }, 00:26:39.183 "ctrlr_data": { 00:26:39.183 "cntlid": 0, 00:26:39.183 "vendor_id": "0x1b36", 00:26:39.183 "model_number": "QEMU NVMe Ctrl", 00:26:39.183 "serial_number": "12341", 00:26:39.183 "firmware_revision": "8.0.0", 00:26:39.183 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:39.183 "oacs": { 00:26:39.183 "security": 0, 00:26:39.183 "format": 1, 00:26:39.183 "firmware": 0, 00:26:39.183 "ns_manage": 1 00:26:39.183 }, 00:26:39.183 "multi_ctrlr": false, 00:26:39.183 "ana_reporting": false 00:26:39.183 }, 00:26:39.183 "vs": { 00:26:39.183 "nvme_version": "1.4" 00:26:39.183 }, 00:26:39.183 "ns_data": { 00:26:39.183 "id": 1, 00:26:39.183 "can_share": false 00:26:39.183 } 00:26:39.183 } 00:26:39.183 ], 00:26:39.183 "mp_policy": "active_passive" 00:26:39.183 } 00:26:39.183 } 00:26:39.183 ]' 00:26:39.183 11:40:44 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:39.183 11:40:44 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:26:39.183 11:40:44 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:39.183 11:40:44 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:26:39.183 11:40:44 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:26:39.183 11:40:44 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:26:39.183 11:40:44 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:26:39.183 11:40:44 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:26:39.183 11:40:44 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:26:39.183 11:40:44 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:39.183 11:40:44 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:39.442 11:40:45 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=3e70d502-71cd-4152-b5e5-6a6a4ace8cc6 00:26:39.442 11:40:45 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:26:39.442 11:40:45 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3e70d502-71cd-4152-b5e5-6a6a4ace8cc6 00:26:39.701 11:40:45 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:26:39.959 11:40:45 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=7147db90-1f03-4a67-b95c-de504d435f58 00:26:39.959 11:40:45 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 7147db90-1f03-4a67-b95c-de504d435f58 00:26:40.218 11:40:45 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=69505874-b3c1-4785-868a-0624fd1e7253 00:26:40.218 11:40:45 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:26:40.218 11:40:45 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 69505874-b3c1-4785-868a-0624fd1e7253 00:26:40.218 11:40:45 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:26:40.218 11:40:45 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:26:40.218 11:40:45 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=69505874-b3c1-4785-868a-0624fd1e7253 00:26:40.218 11:40:45 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:26:40.218 11:40:45 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 69505874-b3c1-4785-868a-0624fd1e7253 00:26:40.218 11:40:45 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=69505874-b3c1-4785-868a-0624fd1e7253 00:26:40.218 11:40:45 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:40.218 11:40:45 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:26:40.218 11:40:45 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:26:40.218 11:40:45 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 69505874-b3c1-4785-868a-0624fd1e7253 00:26:40.477 11:40:46 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:40.477 { 00:26:40.477 "name": "69505874-b3c1-4785-868a-0624fd1e7253", 00:26:40.477 "aliases": [ 00:26:40.477 "lvs/nvme0n1p0" 00:26:40.477 ], 00:26:40.477 "product_name": "Logical Volume", 00:26:40.477 "block_size": 4096, 00:26:40.477 "num_blocks": 26476544, 00:26:40.477 "uuid": "69505874-b3c1-4785-868a-0624fd1e7253", 00:26:40.477 "assigned_rate_limits": { 00:26:40.477 "rw_ios_per_sec": 0, 00:26:40.477 "rw_mbytes_per_sec": 0, 00:26:40.477 "r_mbytes_per_sec": 0, 00:26:40.477 "w_mbytes_per_sec": 0 00:26:40.477 }, 00:26:40.477 "claimed": false, 00:26:40.477 "zoned": false, 00:26:40.477 "supported_io_types": { 00:26:40.477 "read": true, 00:26:40.477 "write": true, 00:26:40.477 "unmap": true, 00:26:40.477 "flush": false, 00:26:40.477 "reset": true, 00:26:40.477 "nvme_admin": false, 00:26:40.477 "nvme_io": false, 00:26:40.477 "nvme_io_md": false, 00:26:40.477 "write_zeroes": true, 00:26:40.477 "zcopy": false, 00:26:40.477 "get_zone_info": false, 00:26:40.477 "zone_management": false, 00:26:40.477 "zone_append": false, 00:26:40.477 "compare": false, 00:26:40.477 "compare_and_write": false, 00:26:40.477 "abort": false, 00:26:40.477 "seek_hole": true, 00:26:40.477 "seek_data": true, 00:26:40.477 "copy": false, 00:26:40.477 "nvme_iov_md": false 00:26:40.477 }, 00:26:40.477 "driver_specific": { 00:26:40.477 "lvol": { 00:26:40.477 "lvol_store_uuid": "7147db90-1f03-4a67-b95c-de504d435f58", 00:26:40.477 "base_bdev": "nvme0n1", 00:26:40.477 "thin_provision": true, 00:26:40.477 "num_allocated_clusters": 0, 00:26:40.477 "snapshot": false, 00:26:40.477 "clone": false, 00:26:40.477 "esnap_clone": false 00:26:40.477 } 00:26:40.477 } 00:26:40.477 } 00:26:40.477 ]' 00:26:40.477 11:40:46 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:40.477 11:40:46 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:26:40.477 11:40:46 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:40.477 11:40:46 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:40.477 11:40:46 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:40.477 11:40:46 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:26:40.477 11:40:46 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:26:40.477 11:40:46 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:26:40.477 11:40:46 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:26:40.736 11:40:46 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:26:40.736 11:40:46 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:26:40.736 11:40:46 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 69505874-b3c1-4785-868a-0624fd1e7253 00:26:40.736 11:40:46 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=69505874-b3c1-4785-868a-0624fd1e7253 00:26:40.736 11:40:46 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:40.736 11:40:46 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:26:40.736 11:40:46 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:26:40.736 11:40:46 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 69505874-b3c1-4785-868a-0624fd1e7253 00:26:40.995 11:40:46 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:40.995 { 00:26:40.995 "name": "69505874-b3c1-4785-868a-0624fd1e7253", 00:26:40.995 "aliases": [ 00:26:40.995 "lvs/nvme0n1p0" 00:26:40.995 ], 00:26:40.995 "product_name": "Logical Volume", 00:26:40.995 "block_size": 4096, 00:26:40.995 "num_blocks": 26476544, 00:26:40.995 "uuid": "69505874-b3c1-4785-868a-0624fd1e7253", 00:26:40.995 "assigned_rate_limits": { 00:26:40.995 "rw_ios_per_sec": 0, 00:26:40.995 "rw_mbytes_per_sec": 0, 00:26:40.995 "r_mbytes_per_sec": 0, 00:26:40.995 "w_mbytes_per_sec": 0 00:26:40.995 }, 00:26:40.995 "claimed": false, 00:26:40.995 "zoned": false, 00:26:40.995 "supported_io_types": { 00:26:40.995 "read": true, 00:26:40.995 "write": true, 00:26:40.995 "unmap": true, 00:26:40.995 "flush": false, 00:26:40.995 "reset": true, 00:26:40.995 "nvme_admin": false, 00:26:40.995 "nvme_io": false, 00:26:40.995 "nvme_io_md": false, 00:26:40.995 "write_zeroes": true, 00:26:40.995 "zcopy": false, 00:26:40.995 "get_zone_info": false, 00:26:40.995 "zone_management": false, 00:26:40.995 "zone_append": false, 00:26:40.995 "compare": false, 00:26:40.995 "compare_and_write": false, 00:26:40.995 "abort": false, 00:26:40.995 "seek_hole": true, 00:26:40.995 "seek_data": true, 00:26:40.995 "copy": false, 00:26:40.995 "nvme_iov_md": false 00:26:40.995 }, 00:26:40.995 "driver_specific": { 00:26:40.995 "lvol": { 00:26:40.995 "lvol_store_uuid": "7147db90-1f03-4a67-b95c-de504d435f58", 00:26:40.995 "base_bdev": "nvme0n1", 00:26:40.995 "thin_provision": true, 00:26:40.995 "num_allocated_clusters": 0, 00:26:40.995 "snapshot": false, 00:26:40.995 "clone": false, 00:26:40.995 "esnap_clone": false 00:26:40.995 } 00:26:40.995 } 00:26:40.995 } 00:26:40.995 ]' 00:26:40.995 11:40:46 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
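The repeated `jq '.[] .block_size'` / `jq '.[] .num_blocks'` pipeline traced here is the body of `get_bdev_size`: it pulls both fields out of the `bdev_get_bdevs` JSON and reports the device size in MiB. That is why the 4096-byte-block, 1310720-block nvme0n1 dump earlier resolved to `bdev_size=5120`, and why the 26476544-block lvol being inspected here resolves to 103424 MiB. A condensed, self-contained sketch of the arithmetic — JSON values copied from the nvme0n1 dump above; in the live run the JSON comes from rpc.py against the running target:

# Values taken from the nvme0n1 bdev_get_bdevs dump above (live runs query rpc.py instead).
info='[{"block_size": 4096, "num_blocks": 1310720}]'
bs=$(jq '.[] .block_size' <<< "$info")    # 4096
nb=$(jq '.[] .num_blocks' <<< "$info")    # 1310720
echo "$(( bs * nb / 1024 / 1024 )) MiB"   # 4096 * 1310720 / 2^20 = 5120 MiB

The same computation on the lvol (4096 × 26476544 / 2^20) gives the 103424 MiB seen in the next trace lines, which the script then uses to derive the 5171 MiB cache split on nvc0n1.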
00:26:40.995 11:40:46 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:26:40.995 11:40:46 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:40.995 11:40:46 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:40.995 11:40:46 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:40.995 11:40:46 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:26:40.995 11:40:46 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:26:40.995 11:40:46 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:26:41.255 11:40:46 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:26:41.255 11:40:46 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 69505874-b3c1-4785-868a-0624fd1e7253 00:26:41.255 11:40:46 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=69505874-b3c1-4785-868a-0624fd1e7253 00:26:41.255 11:40:46 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:41.255 11:40:46 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:26:41.255 11:40:46 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:26:41.255 11:40:46 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 69505874-b3c1-4785-868a-0624fd1e7253 00:26:41.514 11:40:47 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:41.514 { 00:26:41.514 "name": "69505874-b3c1-4785-868a-0624fd1e7253", 00:26:41.514 "aliases": [ 00:26:41.514 "lvs/nvme0n1p0" 00:26:41.514 ], 00:26:41.514 "product_name": "Logical Volume", 00:26:41.514 "block_size": 4096, 00:26:41.514 "num_blocks": 26476544, 00:26:41.514 "uuid": "69505874-b3c1-4785-868a-0624fd1e7253", 00:26:41.514 "assigned_rate_limits": { 00:26:41.514 "rw_ios_per_sec": 0, 00:26:41.514 "rw_mbytes_per_sec": 0, 00:26:41.514 "r_mbytes_per_sec": 0, 00:26:41.514 "w_mbytes_per_sec": 0 00:26:41.514 }, 00:26:41.514 "claimed": false, 00:26:41.514 "zoned": false, 00:26:41.514 "supported_io_types": { 00:26:41.514 "read": true, 00:26:41.514 "write": true, 00:26:41.514 "unmap": true, 00:26:41.514 "flush": false, 00:26:41.514 "reset": true, 00:26:41.514 "nvme_admin": false, 00:26:41.514 "nvme_io": false, 00:26:41.514 "nvme_io_md": false, 00:26:41.514 "write_zeroes": true, 00:26:41.514 "zcopy": false, 00:26:41.514 "get_zone_info": false, 00:26:41.514 "zone_management": false, 00:26:41.514 "zone_append": false, 00:26:41.514 "compare": false, 00:26:41.514 "compare_and_write": false, 00:26:41.514 "abort": false, 00:26:41.514 "seek_hole": true, 00:26:41.514 "seek_data": true, 00:26:41.514 "copy": false, 00:26:41.514 "nvme_iov_md": false 00:26:41.514 }, 00:26:41.514 "driver_specific": { 00:26:41.514 "lvol": { 00:26:41.514 "lvol_store_uuid": "7147db90-1f03-4a67-b95c-de504d435f58", 00:26:41.514 "base_bdev": "nvme0n1", 00:26:41.514 "thin_provision": true, 00:26:41.514 "num_allocated_clusters": 0, 00:26:41.514 "snapshot": false, 00:26:41.514 "clone": false, 00:26:41.514 "esnap_clone": false 00:26:41.514 } 00:26:41.514 } 00:26:41.515 } 00:26:41.515 ]' 00:26:41.515 11:40:47 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:41.515 11:40:47 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:26:41.515 11:40:47 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:41.515 11:40:47 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:26:41.515 11:40:47 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:41.515 11:40:47 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:26:41.515 11:40:47 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:26:41.515 11:40:47 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 69505874-b3c1-4785-868a-0624fd1e7253 --l2p_dram_limit 10' 00:26:41.515 11:40:47 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:26:41.515 11:40:47 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:26:41.515 11:40:47 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:26:41.515 11:40:47 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:26:41.515 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:26:41.515 11:40:47 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 69505874-b3c1-4785-868a-0624fd1e7253 --l2p_dram_limit 10 -c nvc0n1p0 00:26:41.775 [2024-11-20 11:40:47.439939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.775 [2024-11-20 11:40:47.439994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:41.775 [2024-11-20 11:40:47.440014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:41.775 [2024-11-20 11:40:47.440025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.775 [2024-11-20 11:40:47.440094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.775 [2024-11-20 11:40:47.440106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:41.775 [2024-11-20 11:40:47.440119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:26:41.775 [2024-11-20 11:40:47.440129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.775 [2024-11-20 11:40:47.440160] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:41.775 [2024-11-20 11:40:47.441207] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:41.775 [2024-11-20 11:40:47.441242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.775 [2024-11-20 11:40:47.441253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:41.775 [2024-11-20 11:40:47.441267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.089 ms 00:26:41.775 [2024-11-20 11:40:47.441277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.775 [2024-11-20 11:40:47.441425] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 35c6d13a-9b5f-4be9-a9d4-969633558956 00:26:41.775 [2024-11-20 11:40:47.442850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.775 [2024-11-20 11:40:47.442875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:26:41.775 [2024-11-20 11:40:47.442887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:26:41.775 [2024-11-20 11:40:47.442902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.775 [2024-11-20 11:40:47.450352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.775 [2024-11-20 
11:40:47.450391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:41.775 [2024-11-20 11:40:47.450407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.398 ms 00:26:41.775 [2024-11-20 11:40:47.450421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.775 [2024-11-20 11:40:47.450556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.775 [2024-11-20 11:40:47.450575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:41.775 [2024-11-20 11:40:47.450586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:26:41.775 [2024-11-20 11:40:47.450604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.775 [2024-11-20 11:40:47.450659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.776 [2024-11-20 11:40:47.450674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:41.776 [2024-11-20 11:40:47.450685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:41.776 [2024-11-20 11:40:47.450702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.776 [2024-11-20 11:40:47.450729] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:41.776 [2024-11-20 11:40:47.455884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.776 [2024-11-20 11:40:47.455918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:41.776 [2024-11-20 11:40:47.455933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.160 ms 00:26:41.776 [2024-11-20 11:40:47.455960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.776 [2024-11-20 11:40:47.455998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.776 [2024-11-20 11:40:47.456010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:41.776 [2024-11-20 11:40:47.456024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:41.776 [2024-11-20 11:40:47.456034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.776 [2024-11-20 11:40:47.456091] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:26:41.776 [2024-11-20 11:40:47.456221] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:41.776 [2024-11-20 11:40:47.456241] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:41.776 [2024-11-20 11:40:47.456255] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:41.776 [2024-11-20 11:40:47.456272] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:41.776 [2024-11-20 11:40:47.456285] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:41.776 [2024-11-20 11:40:47.456299] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:41.776 [2024-11-20 11:40:47.456309] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:41.776 [2024-11-20 11:40:47.456325] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:41.776 [2024-11-20 11:40:47.456335] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:41.776 [2024-11-20 11:40:47.456348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.776 [2024-11-20 11:40:47.456358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:41.776 [2024-11-20 11:40:47.456371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:26:41.776 [2024-11-20 11:40:47.456394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.776 [2024-11-20 11:40:47.456471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.776 [2024-11-20 11:40:47.456482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:41.776 [2024-11-20 11:40:47.456506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:26:41.776 [2024-11-20 11:40:47.456516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.776 [2024-11-20 11:40:47.456617] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:41.776 [2024-11-20 11:40:47.456631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:41.776 [2024-11-20 11:40:47.456645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:41.776 [2024-11-20 11:40:47.456655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:41.776 [2024-11-20 11:40:47.456668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:41.776 [2024-11-20 11:40:47.456678] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:41.776 [2024-11-20 11:40:47.456690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:41.776 [2024-11-20 11:40:47.456700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:41.776 [2024-11-20 11:40:47.456712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:41.776 [2024-11-20 11:40:47.456722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:41.776 [2024-11-20 11:40:47.456734] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:41.776 [2024-11-20 11:40:47.456744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:41.776 [2024-11-20 11:40:47.456757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:41.776 [2024-11-20 11:40:47.456766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:41.776 [2024-11-20 11:40:47.456778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:41.776 [2024-11-20 11:40:47.456788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:41.776 [2024-11-20 11:40:47.456804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:41.776 [2024-11-20 11:40:47.456814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:41.776 [2024-11-20 11:40:47.456825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:41.776 [2024-11-20 11:40:47.456835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:41.776 [2024-11-20 11:40:47.456847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:41.776 [2024-11-20 11:40:47.456857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:41.776 [2024-11-20 11:40:47.456869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:41.776 
[2024-11-20 11:40:47.456879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:41.776 [2024-11-20 11:40:47.456891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:41.776 [2024-11-20 11:40:47.456901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:41.776 [2024-11-20 11:40:47.456913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:41.776 [2024-11-20 11:40:47.456922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:41.776 [2024-11-20 11:40:47.456934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:41.776 [2024-11-20 11:40:47.456943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:41.776 [2024-11-20 11:40:47.456955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:41.776 [2024-11-20 11:40:47.456964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:41.776 [2024-11-20 11:40:47.456979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:41.776 [2024-11-20 11:40:47.456988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:41.776 [2024-11-20 11:40:47.457000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:41.776 [2024-11-20 11:40:47.457009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:41.776 [2024-11-20 11:40:47.457021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:41.776 [2024-11-20 11:40:47.457031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:41.776 [2024-11-20 11:40:47.457042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:41.776 [2024-11-20 11:40:47.457052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:41.776 [2024-11-20 11:40:47.457063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:41.776 [2024-11-20 11:40:47.457073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:41.776 [2024-11-20 11:40:47.457085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:41.776 [2024-11-20 11:40:47.457094] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:41.776 [2024-11-20 11:40:47.457108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:41.776 [2024-11-20 11:40:47.457126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:41.776 [2024-11-20 11:40:47.457139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:41.776 [2024-11-20 11:40:47.457150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:41.776 [2024-11-20 11:40:47.457165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:41.776 [2024-11-20 11:40:47.457175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:41.776 [2024-11-20 11:40:47.457187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:41.776 [2024-11-20 11:40:47.457197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:41.776 [2024-11-20 11:40:47.457210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:41.776 [2024-11-20 11:40:47.457224] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:41.776 [2024-11-20 
11:40:47.457239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:41.776 [2024-11-20 11:40:47.457254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:41.776 [2024-11-20 11:40:47.457268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:41.776 [2024-11-20 11:40:47.457278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:41.776 [2024-11-20 11:40:47.457291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:41.776 [2024-11-20 11:40:47.457301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:41.776 [2024-11-20 11:40:47.457314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:41.776 [2024-11-20 11:40:47.457325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:41.776 [2024-11-20 11:40:47.457338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:41.776 [2024-11-20 11:40:47.457348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:41.776 [2024-11-20 11:40:47.457364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:41.776 [2024-11-20 11:40:47.457375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:41.776 [2024-11-20 11:40:47.457389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:41.777 [2024-11-20 11:40:47.457400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:41.777 [2024-11-20 11:40:47.457412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:41.777 [2024-11-20 11:40:47.457423] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:41.777 [2024-11-20 11:40:47.457436] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:41.777 [2024-11-20 11:40:47.457448] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:41.777 [2024-11-20 11:40:47.457461] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:41.777 [2024-11-20 11:40:47.457480] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:41.777 [2024-11-20 11:40:47.457494] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:41.777 [2024-11-20 11:40:47.457505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.777 [2024-11-20 11:40:47.457519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:41.777 [2024-11-20 11:40:47.457531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.948 ms 00:26:41.777 [2024-11-20 11:40:47.457543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.777 [2024-11-20 11:40:47.457587] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:26:41.777 [2024-11-20 11:40:47.457606] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:26:45.115 [2024-11-20 11:40:50.387758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.115 [2024-11-20 11:40:50.387970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:26:45.115 [2024-11-20 11:40:50.388064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2930.154 ms 00:26:45.115 [2024-11-20 11:40:50.388108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.115 [2024-11-20 11:40:50.428549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.115 [2024-11-20 11:40:50.428748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:45.115 [2024-11-20 11:40:50.428940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.018 ms 00:26:45.115 [2024-11-20 11:40:50.428984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.115 [2024-11-20 11:40:50.429182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.115 [2024-11-20 11:40:50.429260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:45.115 [2024-11-20 11:40:50.429347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:26:45.115 [2024-11-20 11:40:50.429385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.115 [2024-11-20 11:40:50.477652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.115 [2024-11-20 11:40:50.477814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:45.115 [2024-11-20 11:40:50.477901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.193 ms 00:26:45.115 [2024-11-20 11:40:50.477942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.115 [2024-11-20 11:40:50.478011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.115 [2024-11-20 11:40:50.478053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:45.115 [2024-11-20 11:40:50.478132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:45.115 [2024-11-20 11:40:50.478172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.115 [2024-11-20 11:40:50.478732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.115 [2024-11-20 11:40:50.478853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:45.115 [2024-11-20 11:40:50.478937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.447 ms 00:26:45.115 [2024-11-20 11:40:50.478977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.115 
[2024-11-20 11:40:50.479106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.115 [2024-11-20 11:40:50.479164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:45.115 [2024-11-20 11:40:50.479242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:26:45.115 [2024-11-20 11:40:50.479278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.115 [2024-11-20 11:40:50.498484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.115 [2024-11-20 11:40:50.498644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:45.115 [2024-11-20 11:40:50.498746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.165 ms 00:26:45.115 [2024-11-20 11:40:50.498787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.115 [2024-11-20 11:40:50.511365] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:45.115 [2024-11-20 11:40:50.514662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.115 [2024-11-20 11:40:50.514789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:45.115 [2024-11-20 11:40:50.514909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.753 ms 00:26:45.115 [2024-11-20 11:40:50.514944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.115 [2024-11-20 11:40:50.613848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.115 [2024-11-20 11:40:50.614073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:26:45.115 [2024-11-20 11:40:50.614159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.843 ms 00:26:45.115 [2024-11-20 11:40:50.614212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.115 [2024-11-20 11:40:50.614436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.115 [2024-11-20 11:40:50.614592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:45.115 [2024-11-20 11:40:50.614676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:26:45.115 [2024-11-20 11:40:50.614708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.115 [2024-11-20 11:40:50.651368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.115 [2024-11-20 11:40:50.651533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:26:45.115 [2024-11-20 11:40:50.651561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.577 ms 00:26:45.115 [2024-11-20 11:40:50.651572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.115 [2024-11-20 11:40:50.689062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.115 [2024-11-20 11:40:50.689233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:26:45.115 [2024-11-20 11:40:50.689263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.439 ms 00:26:45.115 [2024-11-20 11:40:50.689274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.115 [2024-11-20 11:40:50.690026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.115 [2024-11-20 11:40:50.690049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:45.115 
[2024-11-20 11:40:50.690075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.709 ms 00:26:45.115 [2024-11-20 11:40:50.690086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.115 [2024-11-20 11:40:50.794749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.115 [2024-11-20 11:40:50.794811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:26:45.115 [2024-11-20 11:40:50.794836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 104.571 ms 00:26:45.115 [2024-11-20 11:40:50.794848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.115 [2024-11-20 11:40:50.834109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.115 [2024-11-20 11:40:50.834320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:26:45.115 [2024-11-20 11:40:50.834348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.139 ms 00:26:45.115 [2024-11-20 11:40:50.834360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.115 [2024-11-20 11:40:50.873054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.115 [2024-11-20 11:40:50.873243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:26:45.115 [2024-11-20 11:40:50.873274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.643 ms 00:26:45.115 [2024-11-20 11:40:50.873285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.374 [2024-11-20 11:40:50.910955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.374 [2024-11-20 11:40:50.911003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:45.374 [2024-11-20 11:40:50.911023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.618 ms 00:26:45.374 [2024-11-20 11:40:50.911034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.374 [2024-11-20 11:40:50.911089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.374 [2024-11-20 11:40:50.911102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:45.374 [2024-11-20 11:40:50.911119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:45.374 [2024-11-20 11:40:50.911130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.374 [2024-11-20 11:40:50.911241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.374 [2024-11-20 11:40:50.911254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:45.374 [2024-11-20 11:40:50.911271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:26:45.374 [2024-11-20 11:40:50.911281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.374 [2024-11-20 11:40:50.912347] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3471.911 ms, result 0 00:26:45.374 { 00:26:45.374 "name": "ftl0", 00:26:45.374 "uuid": "35c6d13a-9b5f-4be9-a9d4-969633558956" 00:26:45.374 } 00:26:45.374 11:40:50 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:26:45.374 11:40:50 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:26:45.633 11:40:51 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:26:45.633 11:40:51 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:26:45.892 [2024-11-20 11:40:51.419793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.892 [2024-11-20 11:40:51.419863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:45.892 [2024-11-20 11:40:51.419882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:45.892 [2024-11-20 11:40:51.419907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.892 [2024-11-20 11:40:51.419937] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:45.892 [2024-11-20 11:40:51.424884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.892 [2024-11-20 11:40:51.424923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:45.892 [2024-11-20 11:40:51.424941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.922 ms 00:26:45.892 [2024-11-20 11:40:51.424952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.893 [2024-11-20 11:40:51.425291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.893 [2024-11-20 11:40:51.425311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:45.893 [2024-11-20 11:40:51.425332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:26:45.893 [2024-11-20 11:40:51.425344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.893 [2024-11-20 11:40:51.428142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.893 [2024-11-20 11:40:51.428166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:45.893 [2024-11-20 11:40:51.428180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.776 ms 00:26:45.893 [2024-11-20 11:40:51.428191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.893 [2024-11-20 11:40:51.433955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.893 [2024-11-20 11:40:51.433992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:45.893 [2024-11-20 11:40:51.434013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.739 ms 00:26:45.893 [2024-11-20 11:40:51.434025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.893 [2024-11-20 11:40:51.474013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.893 [2024-11-20 11:40:51.474271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:45.893 [2024-11-20 11:40:51.474304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.920 ms 00:26:45.893 [2024-11-20 11:40:51.474317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.893 [2024-11-20 11:40:51.499316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.893 [2024-11-20 11:40:51.499369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:45.893 [2024-11-20 11:40:51.499389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.933 ms 00:26:45.893 [2024-11-20 11:40:51.499401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.893 [2024-11-20 11:40:51.499625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.893 [2024-11-20 11:40:51.499645] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:45.893 [2024-11-20 11:40:51.499661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:26:45.893 [2024-11-20 11:40:51.499671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.893 [2024-11-20 11:40:51.538336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.893 [2024-11-20 11:40:51.538391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:45.893 [2024-11-20 11:40:51.538411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.637 ms 00:26:45.893 [2024-11-20 11:40:51.538422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.893 [2024-11-20 11:40:51.575824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.893 [2024-11-20 11:40:51.576003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:45.893 [2024-11-20 11:40:51.576033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.327 ms 00:26:45.893 [2024-11-20 11:40:51.576044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.893 [2024-11-20 11:40:51.612984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.893 [2024-11-20 11:40:51.613030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:45.893 [2024-11-20 11:40:51.613050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.878 ms 00:26:45.893 [2024-11-20 11:40:51.613061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.893 [2024-11-20 11:40:51.649581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.893 [2024-11-20 11:40:51.649625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:45.893 [2024-11-20 11:40:51.649645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.373 ms 00:26:45.893 [2024-11-20 11:40:51.649655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.893 [2024-11-20 11:40:51.649704] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:45.893 [2024-11-20 11:40:51.649723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:45.893 [2024-11-20 11:40:51.649738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:45.893 [2024-11-20 11:40:51.649749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:45.893 [2024-11-20 11:40:51.649763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:45.893 [2024-11-20 11:40:51.649774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:45.893 [2024-11-20 11:40:51.649788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:45.893 [2024-11-20 11:40:51.649798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:45.893 [2024-11-20 11:40:51.649815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:45.893 [2024-11-20 11:40:51.649826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:45.893 [2024-11-20 11:40:51.649839] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:45.893 [2024-11-20 11:40:51.649850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:45.893 [2024-11-20 11:40:51.649863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:45.893 [2024-11-20 11:40:51.649874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:45.893 [2024-11-20 11:40:51.649888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:45.893 [2024-11-20 11:40:51.649899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:45.893 [2024-11-20 11:40:51.649912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:45.893 [2024-11-20 11:40:51.649922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:45.893 [2024-11-20 11:40:51.649935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:45.893 [2024-11-20 11:40:51.649946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:45.893 [2024-11-20 11:40:51.649961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:45.893 [2024-11-20 11:40:51.649972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:45.893 [2024-11-20 11:40:51.649985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:45.893 [2024-11-20 11:40:51.649996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:45.893 [2024-11-20 11:40:51.650011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:45.893 [2024-11-20 11:40:51.650022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:45.893 [2024-11-20 11:40:51.650035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:45.893 [2024-11-20 11:40:51.650047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:45.893 [2024-11-20 11:40:51.650061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:45.893 [2024-11-20 11:40:51.650072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:45.893 [2024-11-20 11:40:51.650085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:45.893 [2024-11-20 11:40:51.650096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:45.893 [2024-11-20 11:40:51.650109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:45.893 [2024-11-20 11:40:51.650120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:45.893 [2024-11-20 11:40:51.650133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:45.893 
[2024-11-20 11:40:51.650144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:45.893 [2024-11-20 11:40:51.650157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:45.893 [2024-11-20 11:40:51.650168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:45.893 [2024-11-20 11:40:51.650181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:45.893 [2024-11-20 11:40:51.650192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:45.893 [2024-11-20 11:40:51.650208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:45.893 [2024-11-20 11:40:51.650219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:45.893 [2024-11-20 11:40:51.650232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:45.893 [2024-11-20 11:40:51.650242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:45.893 [2024-11-20 11:40:51.650256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:45.893 [2024-11-20 11:40:51.650266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:45.893 [2024-11-20 11:40:51.650281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:45.893 [2024-11-20 11:40:51.650292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:45.893 [2024-11-20 11:40:51.650305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:45.893 [2024-11-20 11:40:51.650316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:45.893 [2024-11-20 11:40:51.650330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:45.893 [2024-11-20 11:40:51.650341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:45.893 [2024-11-20 11:40:51.650355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:45.893 [2024-11-20 11:40:51.650366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:45.893 [2024-11-20 11:40:51.650379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:45.893 [2024-11-20 11:40:51.650389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:45.894 [2024-11-20 11:40:51.650406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:45.894 [2024-11-20 11:40:51.650417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:45.894 [2024-11-20 11:40:51.650443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:45.894 [2024-11-20 11:40:51.650454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:26:45.894 [2024-11-20 11:40:51.650467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:45.894 [2024-11-20 11:40:51.650494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:45.894 [2024-11-20 11:40:51.650508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:45.894 [2024-11-20 11:40:51.650535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:45.894 [2024-11-20 11:40:51.650549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:45.894 [2024-11-20 11:40:51.650559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:45.894 [2024-11-20 11:40:51.650573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:45.894 [2024-11-20 11:40:51.650584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:45.894 [2024-11-20 11:40:51.650598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:45.894 [2024-11-20 11:40:51.650608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:45.894 [2024-11-20 11:40:51.650621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:45.894 [2024-11-20 11:40:51.650632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:45.894 [2024-11-20 11:40:51.650657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:45.894 [2024-11-20 11:40:51.650667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:45.894 [2024-11-20 11:40:51.650680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:45.894 [2024-11-20 11:40:51.650691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:45.894 [2024-11-20 11:40:51.650704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:45.894 [2024-11-20 11:40:51.650714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:45.894 [2024-11-20 11:40:51.650728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:45.894 [2024-11-20 11:40:51.650738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:45.894 [2024-11-20 11:40:51.650751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:45.894 [2024-11-20 11:40:51.650762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:45.894 [2024-11-20 11:40:51.650776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:46.154 [2024-11-20 11:40:51.650787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:46.154 [2024-11-20 11:40:51.650800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:26:46.154 [2024-11-20 11:40:51.650811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:46.154 [2024-11-20 11:40:51.650823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:46.154 [2024-11-20 11:40:51.650834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:46.154 [2024-11-20 11:40:51.650849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:46.154 [2024-11-20 11:40:51.650859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:46.154 [2024-11-20 11:40:51.650872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:46.154 [2024-11-20 11:40:51.650883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:46.154 [2024-11-20 11:40:51.650895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:46.154 [2024-11-20 11:40:51.650907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:46.154 [2024-11-20 11:40:51.650920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:46.154 [2024-11-20 11:40:51.650931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:46.154 [2024-11-20 11:40:51.650944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:46.154 [2024-11-20 11:40:51.650954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:46.154 [2024-11-20 11:40:51.650969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:46.154 [2024-11-20 11:40:51.650980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:46.154 [2024-11-20 11:40:51.650993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:46.154 [2024-11-20 11:40:51.651012] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:46.154 [2024-11-20 11:40:51.651028] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 35c6d13a-9b5f-4be9-a9d4-969633558956 00:26:46.154 [2024-11-20 11:40:51.651039] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:46.154 [2024-11-20 11:40:51.651054] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:46.154 [2024-11-20 11:40:51.651063] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:46.154 [2024-11-20 11:40:51.651079] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:46.154 [2024-11-20 11:40:51.651088] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:46.154 [2024-11-20 11:40:51.651101] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:46.154 [2024-11-20 11:40:51.651111] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:46.154 [2024-11-20 11:40:51.651123] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:46.154 [2024-11-20 11:40:51.651132] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:26:46.154 [2024-11-20 11:40:51.651144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.154 [2024-11-20 11:40:51.651154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:46.154 [2024-11-20 11:40:51.651168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.442 ms 00:26:46.154 [2024-11-20 11:40:51.651178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.154 [2024-11-20 11:40:51.672000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.154 [2024-11-20 11:40:51.672041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:46.154 [2024-11-20 11:40:51.672060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.754 ms 00:26:46.154 [2024-11-20 11:40:51.672071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.154 [2024-11-20 11:40:51.672675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:46.154 [2024-11-20 11:40:51.672688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:46.154 [2024-11-20 11:40:51.672702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.568 ms 00:26:46.154 [2024-11-20 11:40:51.672716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.154 [2024-11-20 11:40:51.741287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:46.154 [2024-11-20 11:40:51.741348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:46.154 [2024-11-20 11:40:51.741369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:46.154 [2024-11-20 11:40:51.741381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.154 [2024-11-20 11:40:51.741465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:46.154 [2024-11-20 11:40:51.741499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:46.154 [2024-11-20 11:40:51.741515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:46.154 [2024-11-20 11:40:51.741529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.154 [2024-11-20 11:40:51.741676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:46.154 [2024-11-20 11:40:51.741691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:46.154 [2024-11-20 11:40:51.741706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:46.154 [2024-11-20 11:40:51.741717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.155 [2024-11-20 11:40:51.741746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:46.155 [2024-11-20 11:40:51.741758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:46.155 [2024-11-20 11:40:51.741772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:46.155 [2024-11-20 11:40:51.741783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.155 [2024-11-20 11:40:51.874667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:46.155 [2024-11-20 11:40:51.874730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:46.155 [2024-11-20 11:40:51.874750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:26:46.155 [2024-11-20 11:40:51.874761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.414 [2024-11-20 11:40:51.979875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:46.414 [2024-11-20 11:40:51.979939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:46.414 [2024-11-20 11:40:51.979958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:46.414 [2024-11-20 11:40:51.979974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.414 [2024-11-20 11:40:51.980097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:46.414 [2024-11-20 11:40:51.980110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:46.414 [2024-11-20 11:40:51.980123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:46.414 [2024-11-20 11:40:51.980134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.414 [2024-11-20 11:40:51.980197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:46.414 [2024-11-20 11:40:51.980210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:46.414 [2024-11-20 11:40:51.980223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:46.414 [2024-11-20 11:40:51.980233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.414 [2024-11-20 11:40:51.980342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:46.414 [2024-11-20 11:40:51.980355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:46.414 [2024-11-20 11:40:51.980369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:46.414 [2024-11-20 11:40:51.980379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.414 [2024-11-20 11:40:51.980421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:46.414 [2024-11-20 11:40:51.980433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:46.414 [2024-11-20 11:40:51.980446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:46.414 [2024-11-20 11:40:51.980457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.414 [2024-11-20 11:40:51.980522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:46.414 [2024-11-20 11:40:51.980538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:46.414 [2024-11-20 11:40:51.980551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:46.414 [2024-11-20 11:40:51.980562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.414 [2024-11-20 11:40:51.980628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:46.414 [2024-11-20 11:40:51.980640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:46.414 [2024-11-20 11:40:51.980652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:46.414 [2024-11-20 11:40:51.980663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:46.414 [2024-11-20 11:40:51.980799] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 560.970 ms, result 0 00:26:46.414 true 00:26:46.414 11:40:51 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 79757 
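The shutdown above persisted the superblock and set the FTL clean state before the test kills the target; restore.sh then exercises the data path by writing 1 GiB of random data to a scratch file, checksumming it, and pushing it through the ftl0 bdev (the dd and md5sum lines that follow). A minimal sketch of that write-then-verify pattern, with illustrative file names (before.md5 is not the script's actual variable):

  # Generate 1 GiB of random data and record its checksum.
  dd if=/dev/urandom of=testfile bs=4K count=256K
  md5sum testfile > before.md5
  # ...write testfile to the FTL bdev, restart the target,
  #    then read the data back from the bdev into testfile...
  md5sum -c before.md5   # passes only if the restored data matches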
00:26:46.414 11:40:52 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79757 ']' 00:26:46.414 11:40:52 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79757 00:26:46.414 11:40:52 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:26:46.414 11:40:52 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:46.414 11:40:52 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79757 00:26:46.414 11:40:52 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:46.414 11:40:52 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:46.414 11:40:52 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79757' 00:26:46.414 killing process with pid 79757 00:26:46.414 11:40:52 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 79757 00:26:46.414 11:40:52 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 79757 00:26:52.984 11:40:57 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:26:57.177 262144+0 records in 00:26:57.177 262144+0 records out 00:26:57.177 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.85844 s, 221 MB/s 00:26:57.177 11:41:02 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:58.554 11:41:04 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:58.554 [2024-11-20 11:41:04.251529] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:26:58.554 [2024-11-20 11:41:04.251697] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80009 ] 00:26:58.838 [2024-11-20 11:41:04.440664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.101 [2024-11-20 11:41:04.604118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:59.360 [2024-11-20 11:41:04.982576] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:59.360 [2024-11-20 11:41:04.982637] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:59.621 [2024-11-20 11:41:05.150131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.621 [2024-11-20 11:41:05.150420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:59.621 [2024-11-20 11:41:05.150465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:59.621 [2024-11-20 11:41:05.150494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.621 [2024-11-20 11:41:05.150586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.621 [2024-11-20 11:41:05.150599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:59.621 [2024-11-20 11:41:05.150618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:26:59.621 [2024-11-20 11:41:05.150628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.621 [2024-11-20 11:41:05.150652] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:26:59.621 [2024-11-20 11:41:05.151604] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:59.621 [2024-11-20 11:41:05.151630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.621 [2024-11-20 11:41:05.151642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:59.621 [2024-11-20 11:41:05.151653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.983 ms 00:26:59.621 [2024-11-20 11:41:05.151674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.621 [2024-11-20 11:41:05.153210] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:59.621 [2024-11-20 11:41:05.173505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.621 [2024-11-20 11:41:05.173673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:59.621 [2024-11-20 11:41:05.173785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.295 ms 00:26:59.621 [2024-11-20 11:41:05.173804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.621 [2024-11-20 11:41:05.173880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.621 [2024-11-20 11:41:05.173894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:59.621 [2024-11-20 11:41:05.173906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:26:59.621 [2024-11-20 11:41:05.173916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.621 [2024-11-20 11:41:05.180876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.621 [2024-11-20 11:41:05.180913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:59.621 [2024-11-20 11:41:05.180927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.867 ms 00:26:59.621 [2024-11-20 11:41:05.180953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.621 [2024-11-20 11:41:05.181046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.621 [2024-11-20 11:41:05.181060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:59.621 [2024-11-20 11:41:05.181071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:26:59.621 [2024-11-20 11:41:05.181082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.621 [2024-11-20 11:41:05.181139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.621 [2024-11-20 11:41:05.181152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:59.621 [2024-11-20 11:41:05.181163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:59.621 [2024-11-20 11:41:05.181173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.621 [2024-11-20 11:41:05.181201] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:59.621 [2024-11-20 11:41:05.186123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.621 [2024-11-20 11:41:05.186157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:59.621 [2024-11-20 11:41:05.186170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.929 ms 00:26:59.621 [2024-11-20 11:41:05.186184] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.621 [2024-11-20 11:41:05.186218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.621 [2024-11-20 11:41:05.186229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:59.621 [2024-11-20 11:41:05.186250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:59.621 [2024-11-20 11:41:05.186260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.621 [2024-11-20 11:41:05.186319] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:59.621 [2024-11-20 11:41:05.186345] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:59.621 [2024-11-20 11:41:05.186381] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:59.621 [2024-11-20 11:41:05.186402] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:59.621 [2024-11-20 11:41:05.186511] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:59.621 [2024-11-20 11:41:05.186526] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:59.621 [2024-11-20 11:41:05.186540] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:59.621 [2024-11-20 11:41:05.186553] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:59.621 [2024-11-20 11:41:05.186565] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:59.621 [2024-11-20 11:41:05.186577] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:59.621 [2024-11-20 11:41:05.186588] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:59.621 [2024-11-20 11:41:05.186598] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:59.621 [2024-11-20 11:41:05.186608] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:59.621 [2024-11-20 11:41:05.186622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.621 [2024-11-20 11:41:05.186632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:59.621 [2024-11-20 11:41:05.186643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:26:59.621 [2024-11-20 11:41:05.186653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.621 [2024-11-20 11:41:05.186730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.621 [2024-11-20 11:41:05.186741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:59.621 [2024-11-20 11:41:05.186752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:26:59.621 [2024-11-20 11:41:05.186762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.621 [2024-11-20 11:41:05.186858] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:59.621 [2024-11-20 11:41:05.186876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:59.621 [2024-11-20 11:41:05.186887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:26:59.621 [2024-11-20 11:41:05.186898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:59.621 [2024-11-20 11:41:05.186914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:59.621 [2024-11-20 11:41:05.186927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:59.622 [2024-11-20 11:41:05.186936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:59.622 [2024-11-20 11:41:05.186946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:59.622 [2024-11-20 11:41:05.186956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:59.622 [2024-11-20 11:41:05.186965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:59.622 [2024-11-20 11:41:05.186975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:59.622 [2024-11-20 11:41:05.186986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:59.622 [2024-11-20 11:41:05.186999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:59.622 [2024-11-20 11:41:05.187014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:59.622 [2024-11-20 11:41:05.187031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:59.622 [2024-11-20 11:41:05.187055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:59.622 [2024-11-20 11:41:05.187068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:59.622 [2024-11-20 11:41:05.187080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:59.622 [2024-11-20 11:41:05.187094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:59.622 [2024-11-20 11:41:05.187110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:59.622 [2024-11-20 11:41:05.187119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:59.622 [2024-11-20 11:41:05.187129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:59.622 [2024-11-20 11:41:05.187138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:59.622 [2024-11-20 11:41:05.187148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:59.622 [2024-11-20 11:41:05.187157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:59.622 [2024-11-20 11:41:05.187166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:59.622 [2024-11-20 11:41:05.187176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:59.622 [2024-11-20 11:41:05.187185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:59.622 [2024-11-20 11:41:05.187194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:59.622 [2024-11-20 11:41:05.187204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:59.622 [2024-11-20 11:41:05.187213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:59.622 [2024-11-20 11:41:05.187222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:59.622 [2024-11-20 11:41:05.187232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:59.622 [2024-11-20 11:41:05.187242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:59.622 [2024-11-20 11:41:05.187256] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:26:59.622 [2024-11-20 11:41:05.187265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:59.622 [2024-11-20 11:41:05.187274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:59.622 [2024-11-20 11:41:05.187284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:59.622 [2024-11-20 11:41:05.187293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:59.622 [2024-11-20 11:41:05.187302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:59.622 [2024-11-20 11:41:05.187311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:59.622 [2024-11-20 11:41:05.187320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:59.622 [2024-11-20 11:41:05.187330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:59.622 [2024-11-20 11:41:05.187341] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:59.622 [2024-11-20 11:41:05.187352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:59.622 [2024-11-20 11:41:05.187368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:59.622 [2024-11-20 11:41:05.187385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:59.622 [2024-11-20 11:41:05.187401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:59.622 [2024-11-20 11:41:05.187413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:59.622 [2024-11-20 11:41:05.187426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:59.622 [2024-11-20 11:41:05.187438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:59.622 [2024-11-20 11:41:05.187450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:59.622 [2024-11-20 11:41:05.187463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:59.622 [2024-11-20 11:41:05.187493] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:59.622 [2024-11-20 11:41:05.187507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:59.622 [2024-11-20 11:41:05.187519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:59.622 [2024-11-20 11:41:05.187530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:59.622 [2024-11-20 11:41:05.187541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:59.622 [2024-11-20 11:41:05.187551] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:59.622 [2024-11-20 11:41:05.187561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:59.622 [2024-11-20 11:41:05.187572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:59.622 [2024-11-20 11:41:05.187582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:59.622 [2024-11-20 11:41:05.187594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:59.622 [2024-11-20 11:41:05.187611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:59.622 [2024-11-20 11:41:05.187625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:59.622 [2024-11-20 11:41:05.187639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:59.622 [2024-11-20 11:41:05.187652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:59.622 [2024-11-20 11:41:05.187669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:59.622 [2024-11-20 11:41:05.187683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:59.622 [2024-11-20 11:41:05.187696] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:59.622 [2024-11-20 11:41:05.187715] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:59.622 [2024-11-20 11:41:05.187730] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:59.622 [2024-11-20 11:41:05.187747] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:59.622 [2024-11-20 11:41:05.187766] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:59.622 [2024-11-20 11:41:05.187781] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:59.622 [2024-11-20 11:41:05.187796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.622 [2024-11-20 11:41:05.187810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:59.622 [2024-11-20 11:41:05.187824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.994 ms 00:26:59.622 [2024-11-20 11:41:05.187838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.622 [2024-11-20 11:41:05.228902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.622 [2024-11-20 11:41:05.228960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:59.622 [2024-11-20 11:41:05.228977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.998 ms 00:26:59.622 [2024-11-20 11:41:05.228988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.622 [2024-11-20 11:41:05.229097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.622 [2024-11-20 11:41:05.229108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:59.622 [2024-11-20 11:41:05.229127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.056 ms 00:26:59.622 [2024-11-20 11:41:05.229138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.622 [2024-11-20 11:41:05.286183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.622 [2024-11-20 11:41:05.286242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:59.622 [2024-11-20 11:41:05.286258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.957 ms 00:26:59.622 [2024-11-20 11:41:05.286268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.622 [2024-11-20 11:41:05.286333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.622 [2024-11-20 11:41:05.286345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:59.622 [2024-11-20 11:41:05.286356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:59.622 [2024-11-20 11:41:05.286371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.623 [2024-11-20 11:41:05.286891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.623 [2024-11-20 11:41:05.286907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:59.623 [2024-11-20 11:41:05.286919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.431 ms 00:26:59.623 [2024-11-20 11:41:05.286929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.623 [2024-11-20 11:41:05.287074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.623 [2024-11-20 11:41:05.287096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:59.623 [2024-11-20 11:41:05.287114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:26:59.623 [2024-11-20 11:41:05.287133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.623 [2024-11-20 11:41:05.307003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.623 [2024-11-20 11:41:05.307051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:59.623 [2024-11-20 11:41:05.307087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.836 ms 00:26:59.623 [2024-11-20 11:41:05.307098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.623 [2024-11-20 11:41:05.327286] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:26:59.623 [2024-11-20 11:41:05.327339] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:59.623 [2024-11-20 11:41:05.327355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.623 [2024-11-20 11:41:05.327367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:59.623 [2024-11-20 11:41:05.327379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.111 ms 00:26:59.623 [2024-11-20 11:41:05.327389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.623 [2024-11-20 11:41:05.358041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.623 [2024-11-20 11:41:05.358093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:59.623 [2024-11-20 11:41:05.358116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.598 ms 00:26:59.623 [2024-11-20 11:41:05.358127] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.623 [2024-11-20 11:41:05.376845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.623 [2024-11-20 11:41:05.376908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:59.623 [2024-11-20 11:41:05.376927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.661 ms 00:26:59.623 [2024-11-20 11:41:05.376942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.882 [2024-11-20 11:41:05.396811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.882 [2024-11-20 11:41:05.396858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:59.882 [2024-11-20 11:41:05.396873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.814 ms 00:26:59.882 [2024-11-20 11:41:05.396884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.882 [2024-11-20 11:41:05.397788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.882 [2024-11-20 11:41:05.397822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:59.882 [2024-11-20 11:41:05.397839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.763 ms 00:26:59.882 [2024-11-20 11:41:05.397852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.882 [2024-11-20 11:41:05.485845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.882 [2024-11-20 11:41:05.485915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:59.882 [2024-11-20 11:41:05.485933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.950 ms 00:26:59.882 [2024-11-20 11:41:05.485956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.882 [2024-11-20 11:41:05.497907] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:59.882 [2024-11-20 11:41:05.501168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.882 [2024-11-20 11:41:05.501332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:59.882 [2024-11-20 11:41:05.501359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.140 ms 00:26:59.882 [2024-11-20 11:41:05.501374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.882 [2024-11-20 11:41:05.501519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.882 [2024-11-20 11:41:05.501536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:59.882 [2024-11-20 11:41:05.501549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:59.882 [2024-11-20 11:41:05.501559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.882 [2024-11-20 11:41:05.501673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.882 [2024-11-20 11:41:05.501688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:59.882 [2024-11-20 11:41:05.501699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:26:59.882 [2024-11-20 11:41:05.501709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.882 [2024-11-20 11:41:05.501733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.882 [2024-11-20 11:41:05.501744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:26:59.882 [2024-11-20 11:41:05.501755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:59.882 [2024-11-20 11:41:05.501766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.882 [2024-11-20 11:41:05.501805] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:59.882 [2024-11-20 11:41:05.501818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.882 [2024-11-20 11:41:05.501834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:59.882 [2024-11-20 11:41:05.501845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:26:59.882 [2024-11-20 11:41:05.501855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.882 [2024-11-20 11:41:05.539452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.882 [2024-11-20 11:41:05.539516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:59.882 [2024-11-20 11:41:05.539532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.572 ms 00:26:59.882 [2024-11-20 11:41:05.539544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.882 [2024-11-20 11:41:05.539651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.882 [2024-11-20 11:41:05.539666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:59.882 [2024-11-20 11:41:05.539677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:26:59.882 [2024-11-20 11:41:05.539687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.882 [2024-11-20 11:41:05.541309] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 390.315 ms, result 0 00:27:00.819  [2024-11-20T11:41:07.959Z] Copying: 26/1024 [MB] (26 MBps) [2024-11-20T11:41:08.897Z] Copying: 53/1024 [MB] (27 MBps) [2024-11-20T11:41:09.851Z] Copying: 82/1024 [MB] (28 MBps) [2024-11-20T11:41:10.788Z] Copying: 112/1024 [MB] (30 MBps) [2024-11-20T11:41:11.726Z] Copying: 143/1024 [MB] (30 MBps) [2024-11-20T11:41:12.661Z] Copying: 176/1024 [MB] (32 MBps) [2024-11-20T11:41:13.600Z] Copying: 208/1024 [MB] (32 MBps) [2024-11-20T11:41:14.978Z] Copying: 239/1024 [MB] (31 MBps) [2024-11-20T11:41:15.914Z] Copying: 272/1024 [MB] (32 MBps) [2024-11-20T11:41:16.851Z] Copying: 304/1024 [MB] (31 MBps) [2024-11-20T11:41:17.785Z] Copying: 336/1024 [MB] (32 MBps) [2024-11-20T11:41:18.720Z] Copying: 367/1024 [MB] (31 MBps) [2024-11-20T11:41:19.656Z] Copying: 399/1024 [MB] (32 MBps) [2024-11-20T11:41:20.627Z] Copying: 431/1024 [MB] (31 MBps) [2024-11-20T11:41:21.565Z] Copying: 460/1024 [MB] (29 MBps) [2024-11-20T11:41:22.945Z] Copying: 491/1024 [MB] (30 MBps) [2024-11-20T11:41:23.881Z] Copying: 520/1024 [MB] (29 MBps) [2024-11-20T11:41:24.817Z] Copying: 550/1024 [MB] (29 MBps) [2024-11-20T11:41:25.753Z] Copying: 580/1024 [MB] (29 MBps) [2024-11-20T11:41:26.689Z] Copying: 610/1024 [MB] (30 MBps) [2024-11-20T11:41:27.626Z] Copying: 640/1024 [MB] (30 MBps) [2024-11-20T11:41:28.562Z] Copying: 670/1024 [MB] (30 MBps) [2024-11-20T11:41:29.940Z] Copying: 701/1024 [MB] (30 MBps) [2024-11-20T11:41:30.877Z] Copying: 731/1024 [MB] (30 MBps) [2024-11-20T11:41:31.811Z] Copying: 762/1024 [MB] (30 MBps) [2024-11-20T11:41:32.747Z] Copying: 792/1024 [MB] (29 MBps) [2024-11-20T11:41:33.740Z] Copying: 821/1024 [MB] (29 
MBps) [2024-11-20T11:41:34.675Z] Copying: 852/1024 [MB] (30 MBps) [2024-11-20T11:41:35.610Z] Copying: 883/1024 [MB] (31 MBps) [2024-11-20T11:41:36.986Z] Copying: 914/1024 [MB] (30 MBps) [2024-11-20T11:41:37.922Z] Copying: 944/1024 [MB] (30 MBps) [2024-11-20T11:41:38.857Z] Copying: 973/1024 [MB] (28 MBps) [2024-11-20T11:41:39.424Z] Copying: 1003/1024 [MB] (29 MBps) [2024-11-20T11:41:39.424Z] Copying: 1024/1024 [MB] (average 30 MBps)[2024-11-20 11:41:39.259228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.662 [2024-11-20 11:41:39.259285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:33.662 [2024-11-20 11:41:39.259302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:33.662 [2024-11-20 11:41:39.259313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.662 [2024-11-20 11:41:39.259336] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:33.662 [2024-11-20 11:41:39.263472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.662 [2024-11-20 11:41:39.263513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:33.662 [2024-11-20 11:41:39.263526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.119 ms 00:27:33.662 [2024-11-20 11:41:39.263537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.662 [2024-11-20 11:41:39.265358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.662 [2024-11-20 11:41:39.265399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:33.662 [2024-11-20 11:41:39.265412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.790 ms 00:27:33.662 [2024-11-20 11:41:39.265423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.662 [2024-11-20 11:41:39.280287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.662 [2024-11-20 11:41:39.280439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:33.662 [2024-11-20 11:41:39.280479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.845 ms 00:27:33.662 [2024-11-20 11:41:39.280503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.662 [2024-11-20 11:41:39.285695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.662 [2024-11-20 11:41:39.285737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:33.662 [2024-11-20 11:41:39.285749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.150 ms 00:27:33.662 [2024-11-20 11:41:39.285760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.662 [2024-11-20 11:41:39.324308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.662 [2024-11-20 11:41:39.324370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:33.662 [2024-11-20 11:41:39.324386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.483 ms 00:27:33.662 [2024-11-20 11:41:39.324396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.662 [2024-11-20 11:41:39.346150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.662 [2024-11-20 11:41:39.346202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:33.662 [2024-11-20 11:41:39.346218] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.712 ms 00:27:33.662 [2024-11-20 11:41:39.346230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.662 [2024-11-20 11:41:39.346366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.662 [2024-11-20 11:41:39.346381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:33.662 [2024-11-20 11:41:39.346400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:27:33.662 [2024-11-20 11:41:39.346411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.662 [2024-11-20 11:41:39.384929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.662 [2024-11-20 11:41:39.385083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:33.662 [2024-11-20 11:41:39.385113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.499 ms 00:27:33.662 [2024-11-20 11:41:39.385124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.921 [2024-11-20 11:41:39.422001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.921 [2024-11-20 11:41:39.422044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:33.921 [2024-11-20 11:41:39.422073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.837 ms 00:27:33.921 [2024-11-20 11:41:39.422084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.921 [2024-11-20 11:41:39.460027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.921 [2024-11-20 11:41:39.460072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:33.921 [2024-11-20 11:41:39.460087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.899 ms 00:27:33.921 [2024-11-20 11:41:39.460099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.921 [2024-11-20 11:41:39.499370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.921 [2024-11-20 11:41:39.499421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:33.921 [2024-11-20 11:41:39.499436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.181 ms 00:27:33.921 [2024-11-20 11:41:39.499447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.921 [2024-11-20 11:41:39.499503] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:33.921 [2024-11-20 11:41:39.499522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:33.921 [2024-11-20 11:41:39.499535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:33.921 [2024-11-20 11:41:39.499546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:33.921 [2024-11-20 11:41:39.499558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:33.921 [2024-11-20 11:41:39.499569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:33.921 [2024-11-20 11:41:39.499580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:33.921 [2024-11-20 11:41:39.499609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: 
free 00:27:33.921 [2024-11-20 11:41:39.499620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:33.921 [2024-11-20 11:41:39.499632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:33.921 [2024-11-20 11:41:39.499643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:33.921 [2024-11-20 11:41:39.499655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.499666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.499677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.499688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.499699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.499710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.499720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.499732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.499743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.499754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.499764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.499774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.499785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.499796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.499806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.499816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.499826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.499837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.499854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.499865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.499876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.499886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 
261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.499897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.499908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.499919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.499930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.499941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.499952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.499962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.499973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.499983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.499994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500417] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:33.922 [2024-11-20 11:41:39.500635] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:33.923 [2024-11-20 11:41:39.500649] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 35c6d13a-9b5f-4be9-a9d4-969633558956 00:27:33.923 [2024-11-20 11:41:39.500660] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:33.923 [2024-11-20 11:41:39.500673] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:33.923 [2024-11-20 11:41:39.500683] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:33.923 [2024-11-20 11:41:39.500693] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:33.923 [2024-11-20 11:41:39.500702] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:33.923 [2024-11-20 11:41:39.500712] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:33.923 [2024-11-20 11:41:39.500722] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:33.923 [2024-11-20 11:41:39.500742] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:33.923 [2024-11-20 11:41:39.500751] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:33.923 [2024-11-20 11:41:39.500761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.923 [2024-11-20 11:41:39.500772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:33.923 [2024-11-20 11:41:39.500783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.260 ms 00:27:33.923 [2024-11-20 11:41:39.500792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.923 [2024-11-20 11:41:39.521838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.923 [2024-11-20 11:41:39.521882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:33.923 [2024-11-20 11:41:39.521896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.007 ms 00:27:33.923 [2024-11-20 11:41:39.521907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.923 [2024-11-20 11:41:39.522506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.923 [2024-11-20 11:41:39.522522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:33.923 [2024-11-20 11:41:39.522533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.572 ms 00:27:33.923 [2024-11-20 11:41:39.522544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.923 [2024-11-20 11:41:39.577086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.923 [2024-11-20 11:41:39.577341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:33.923 [2024-11-20 11:41:39.577370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.923 [2024-11-20 11:41:39.577383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.923 [2024-11-20 11:41:39.577460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.923 [2024-11-20 11:41:39.577473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:33.923 [2024-11-20 11:41:39.577501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.923 [2024-11-20 11:41:39.577514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.923 [2024-11-20 11:41:39.577618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.923 [2024-11-20 11:41:39.577634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:33.923 [2024-11-20 11:41:39.577647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.923 [2024-11-20 11:41:39.577659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.923 [2024-11-20 11:41:39.577679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.923 [2024-11-20 11:41:39.577691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:33.923 [2024-11-20 11:41:39.577703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.923 [2024-11-20 11:41:39.577714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:27:34.182 [2024-11-20 11:41:39.707829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.182 [2024-11-20 11:41:39.708053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:34.182 [2024-11-20 11:41:39.708077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.182 [2024-11-20 11:41:39.708089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.182 [2024-11-20 11:41:39.813684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.182 [2024-11-20 11:41:39.813740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:34.182 [2024-11-20 11:41:39.813755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.182 [2024-11-20 11:41:39.813767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.182 [2024-11-20 11:41:39.813865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.182 [2024-11-20 11:41:39.813882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:34.182 [2024-11-20 11:41:39.813893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.182 [2024-11-20 11:41:39.813903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.182 [2024-11-20 11:41:39.813948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.182 [2024-11-20 11:41:39.813959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:34.182 [2024-11-20 11:41:39.813970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.182 [2024-11-20 11:41:39.813980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.182 [2024-11-20 11:41:39.814087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.182 [2024-11-20 11:41:39.814104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:34.182 [2024-11-20 11:41:39.814115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.182 [2024-11-20 11:41:39.814125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.182 [2024-11-20 11:41:39.814161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.182 [2024-11-20 11:41:39.814174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:34.182 [2024-11-20 11:41:39.814184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.182 [2024-11-20 11:41:39.814194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.182 [2024-11-20 11:41:39.814232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.182 [2024-11-20 11:41:39.814243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:34.182 [2024-11-20 11:41:39.814257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.182 [2024-11-20 11:41:39.814268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.182 [2024-11-20 11:41:39.814310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:34.182 [2024-11-20 11:41:39.814322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:34.182 [2024-11-20 11:41:39.814332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:34.182 [2024-11-20 11:41:39.814342] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.182 [2024-11-20 11:41:39.814459] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 555.194 ms, result 0 00:27:35.557 00:27:35.557 00:27:35.815 11:41:41 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:27:35.815 [2024-11-20 11:41:41.463448] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:27:35.815 [2024-11-20 11:41:41.463638] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80378 ] 00:27:36.073 [2024-11-20 11:41:41.658795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:36.073 [2024-11-20 11:41:41.783785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:36.642 [2024-11-20 11:41:42.169132] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:36.642 [2024-11-20 11:41:42.169222] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:36.642 [2024-11-20 11:41:42.334237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.642 [2024-11-20 11:41:42.334429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:36.642 [2024-11-20 11:41:42.334463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:36.642 [2024-11-20 11:41:42.334488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.642 [2024-11-20 11:41:42.334572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.642 [2024-11-20 11:41:42.334586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:36.642 [2024-11-20 11:41:42.334602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:27:36.642 [2024-11-20 11:41:42.334613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.642 [2024-11-20 11:41:42.334637] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:36.642 [2024-11-20 11:41:42.335709] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:36.642 [2024-11-20 11:41:42.335737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.642 [2024-11-20 11:41:42.335748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:36.642 [2024-11-20 11:41:42.335759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.105 ms 00:27:36.642 [2024-11-20 11:41:42.335769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.642 [2024-11-20 11:41:42.337369] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:36.642 [2024-11-20 11:41:42.357728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.642 [2024-11-20 11:41:42.357869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:36.642 [2024-11-20 11:41:42.357892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.359 ms 00:27:36.642 [2024-11-20 
11:41:42.357903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.642 [2024-11-20 11:41:42.357991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.642 [2024-11-20 11:41:42.358005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:36.642 [2024-11-20 11:41:42.358016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:27:36.642 [2024-11-20 11:41:42.358026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.642 [2024-11-20 11:41:42.365048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.642 [2024-11-20 11:41:42.365081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:36.642 [2024-11-20 11:41:42.365093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.946 ms 00:27:36.642 [2024-11-20 11:41:42.365126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.642 [2024-11-20 11:41:42.365212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.642 [2024-11-20 11:41:42.365227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:36.642 [2024-11-20 11:41:42.365237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:27:36.642 [2024-11-20 11:41:42.365248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.642 [2024-11-20 11:41:42.365292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.642 [2024-11-20 11:41:42.365304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:36.642 [2024-11-20 11:41:42.365315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:36.642 [2024-11-20 11:41:42.365325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.642 [2024-11-20 11:41:42.365351] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:36.642 [2024-11-20 11:41:42.370325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.642 [2024-11-20 11:41:42.370356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:36.642 [2024-11-20 11:41:42.370369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.981 ms 00:27:36.642 [2024-11-20 11:41:42.370383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.642 [2024-11-20 11:41:42.370415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.642 [2024-11-20 11:41:42.370426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:36.642 [2024-11-20 11:41:42.370437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:36.642 [2024-11-20 11:41:42.370447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.642 [2024-11-20 11:41:42.370516] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:36.642 [2024-11-20 11:41:42.370540] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:36.642 [2024-11-20 11:41:42.370576] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:36.642 [2024-11-20 11:41:42.370597] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:36.642 [2024-11-20 
11:41:42.370687] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:36.642 [2024-11-20 11:41:42.370701] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:36.642 [2024-11-20 11:41:42.370714] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:36.642 [2024-11-20 11:41:42.370736] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:36.642 [2024-11-20 11:41:42.370748] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:36.642 [2024-11-20 11:41:42.370760] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:36.642 [2024-11-20 11:41:42.370770] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:36.642 [2024-11-20 11:41:42.370780] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:36.642 [2024-11-20 11:41:42.370790] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:36.642 [2024-11-20 11:41:42.370805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.642 [2024-11-20 11:41:42.370815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:36.642 [2024-11-20 11:41:42.370826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:27:36.642 [2024-11-20 11:41:42.370835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.642 [2024-11-20 11:41:42.370912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.642 [2024-11-20 11:41:42.370923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:36.642 [2024-11-20 11:41:42.370933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:27:36.642 [2024-11-20 11:41:42.370943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.642 [2024-11-20 11:41:42.371038] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:36.642 [2024-11-20 11:41:42.371055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:36.642 [2024-11-20 11:41:42.371066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:36.642 [2024-11-20 11:41:42.371076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:36.642 [2024-11-20 11:41:42.371087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:36.642 [2024-11-20 11:41:42.371096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:36.642 [2024-11-20 11:41:42.371106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:36.642 [2024-11-20 11:41:42.371117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:36.642 [2024-11-20 11:41:42.371126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:36.642 [2024-11-20 11:41:42.371136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:36.642 [2024-11-20 11:41:42.371145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:36.642 [2024-11-20 11:41:42.371156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:36.642 [2024-11-20 11:41:42.371165] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 
00:27:36.642 [2024-11-20 11:41:42.371175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:36.642 [2024-11-20 11:41:42.371185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:36.642 [2024-11-20 11:41:42.371203] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:36.642 [2024-11-20 11:41:42.371212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:36.642 [2024-11-20 11:41:42.371222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:36.642 [2024-11-20 11:41:42.371231] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:36.642 [2024-11-20 11:41:42.371240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:36.642 [2024-11-20 11:41:42.371250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:36.642 [2024-11-20 11:41:42.371259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:36.642 [2024-11-20 11:41:42.371269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:36.642 [2024-11-20 11:41:42.371278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:36.642 [2024-11-20 11:41:42.371287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:36.642 [2024-11-20 11:41:42.371297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:36.642 [2024-11-20 11:41:42.371306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:36.642 [2024-11-20 11:41:42.371316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:36.642 [2024-11-20 11:41:42.371325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:36.642 [2024-11-20 11:41:42.371334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:36.642 [2024-11-20 11:41:42.371343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:36.642 [2024-11-20 11:41:42.371353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:36.642 [2024-11-20 11:41:42.371363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:36.643 [2024-11-20 11:41:42.371372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:36.643 [2024-11-20 11:41:42.371381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:36.643 [2024-11-20 11:41:42.371390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:36.643 [2024-11-20 11:41:42.371400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:36.643 [2024-11-20 11:41:42.371409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:36.643 [2024-11-20 11:41:42.371418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:36.643 [2024-11-20 11:41:42.371427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:36.643 [2024-11-20 11:41:42.371436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:36.643 [2024-11-20 11:41:42.371446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:36.643 [2024-11-20 11:41:42.371455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:36.643 [2024-11-20 11:41:42.371465] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:36.643 [2024-11-20 11:41:42.371744] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:36.643 [2024-11-20 11:41:42.371781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:36.643 [2024-11-20 11:41:42.371811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:36.643 [2024-11-20 11:41:42.371881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:36.643 [2024-11-20 11:41:42.371916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:36.643 [2024-11-20 11:41:42.371946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:36.643 [2024-11-20 11:41:42.371977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:36.643 [2024-11-20 11:41:42.372006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:36.643 [2024-11-20 11:41:42.372074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:36.643 [2024-11-20 11:41:42.372112] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:36.643 [2024-11-20 11:41:42.372210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:36.643 [2024-11-20 11:41:42.372265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:36.643 [2024-11-20 11:41:42.372352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:36.643 [2024-11-20 11:41:42.372444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:36.643 [2024-11-20 11:41:42.372513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:36.643 [2024-11-20 11:41:42.372603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:36.643 [2024-11-20 11:41:42.372787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:36.643 [2024-11-20 11:41:42.372837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:36.643 [2024-11-20 11:41:42.372886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:36.643 [2024-11-20 11:41:42.372970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:36.643 [2024-11-20 11:41:42.373024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:36.643 [2024-11-20 11:41:42.373037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:36.643 [2024-11-20 11:41:42.373047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:36.643 [2024-11-20 11:41:42.373057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:36.643 [2024-11-20 
11:41:42.373068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:36.643 [2024-11-20 11:41:42.373078] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:36.643 [2024-11-20 11:41:42.373096] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:36.643 [2024-11-20 11:41:42.373123] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:36.643 [2024-11-20 11:41:42.373139] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:36.643 [2024-11-20 11:41:42.373150] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:36.643 [2024-11-20 11:41:42.373160] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:36.643 [2024-11-20 11:41:42.373174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.643 [2024-11-20 11:41:42.373185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:36.643 [2024-11-20 11:41:42.373196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.191 ms 00:27:36.643 [2024-11-20 11:41:42.373206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.935 [2024-11-20 11:41:42.415796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.935 [2024-11-20 11:41:42.415842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:36.935 [2024-11-20 11:41:42.415860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.520 ms 00:27:36.935 [2024-11-20 11:41:42.415874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.935 [2024-11-20 11:41:42.415973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.935 [2024-11-20 11:41:42.415987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:36.935 [2024-11-20 11:41:42.416001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:27:36.935 [2024-11-20 11:41:42.416015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.935 [2024-11-20 11:41:42.480936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.935 [2024-11-20 11:41:42.481128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:36.935 [2024-11-20 11:41:42.481180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.842 ms 00:27:36.935 [2024-11-20 11:41:42.481197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.935 [2024-11-20 11:41:42.481259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.935 [2024-11-20 11:41:42.481275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:36.935 [2024-11-20 11:41:42.481292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:36.935 [2024-11-20 11:41:42.481314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.935 [2024-11-20 11:41:42.481873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:27:36.935 [2024-11-20 11:41:42.481897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:36.935 [2024-11-20 11:41:42.481916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.462 ms 00:27:36.935 [2024-11-20 11:41:42.481933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.935 [2024-11-20 11:41:42.482111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.935 [2024-11-20 11:41:42.482144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:36.935 [2024-11-20 11:41:42.482161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:27:36.935 [2024-11-20 11:41:42.482183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.935 [2024-11-20 11:41:42.505240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.935 [2024-11-20 11:41:42.505281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:36.935 [2024-11-20 11:41:42.505302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.027 ms 00:27:36.935 [2024-11-20 11:41:42.505314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.935 [2024-11-20 11:41:42.526370] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:36.935 [2024-11-20 11:41:42.526414] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:36.935 [2024-11-20 11:41:42.526433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.935 [2024-11-20 11:41:42.526446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:36.935 [2024-11-20 11:41:42.526460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.981 ms 00:27:36.935 [2024-11-20 11:41:42.526485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.935 [2024-11-20 11:41:42.559447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.935 [2024-11-20 11:41:42.559536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:36.935 [2024-11-20 11:41:42.559559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.913 ms 00:27:36.935 [2024-11-20 11:41:42.559577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.935 [2024-11-20 11:41:42.579785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.935 [2024-11-20 11:41:42.579828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:36.935 [2024-11-20 11:41:42.579843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.142 ms 00:27:36.935 [2024-11-20 11:41:42.579855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.935 [2024-11-20 11:41:42.600068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.935 [2024-11-20 11:41:42.600117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:36.935 [2024-11-20 11:41:42.600139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.167 ms 00:27:36.935 [2024-11-20 11:41:42.600156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:36.936 [2024-11-20 11:41:42.601165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:36.936 [2024-11-20 11:41:42.601315] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:36.936 [2024-11-20 11:41:42.601340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.875 ms 00:27:36.936 [2024-11-20 11:41:42.601359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.193 [2024-11-20 11:41:42.701890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.193 [2024-11-20 11:41:42.701960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:37.193 [2024-11-20 11:41:42.702009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 100.500 ms 00:27:37.193 [2024-11-20 11:41:42.702022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.193 [2024-11-20 11:41:42.715153] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:37.193 [2024-11-20 11:41:42.718640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.193 [2024-11-20 11:41:42.718679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:37.193 [2024-11-20 11:41:42.718696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.544 ms 00:27:37.193 [2024-11-20 11:41:42.718709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.193 [2024-11-20 11:41:42.718837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.193 [2024-11-20 11:41:42.718853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:37.193 [2024-11-20 11:41:42.718867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:37.193 [2024-11-20 11:41:42.718887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.193 [2024-11-20 11:41:42.718972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.193 [2024-11-20 11:41:42.718987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:37.193 [2024-11-20 11:41:42.719000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:27:37.193 [2024-11-20 11:41:42.719012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.193 [2024-11-20 11:41:42.719038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.193 [2024-11-20 11:41:42.719050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:37.193 [2024-11-20 11:41:42.719063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:37.193 [2024-11-20 11:41:42.719074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.193 [2024-11-20 11:41:42.719116] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:37.193 [2024-11-20 11:41:42.719136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.193 [2024-11-20 11:41:42.719148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:37.193 [2024-11-20 11:41:42.719160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:27:37.193 [2024-11-20 11:41:42.719172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.193 [2024-11-20 11:41:42.762760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.193 [2024-11-20 11:41:42.762820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:37.193 [2024-11-20 11:41:42.762838] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.562 ms 00:27:37.193 [2024-11-20 11:41:42.762863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.193 [2024-11-20 11:41:42.762952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.193 [2024-11-20 11:41:42.762967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:37.193 [2024-11-20 11:41:42.762980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:27:37.193 [2024-11-20 11:41:42.762992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.193 [2024-11-20 11:41:42.764246] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 429.449 ms, result 0 00:27:38.564  [2024-11-20T11:41:45.258Z] Copying: 36/1024 [MB] (36 MBps) [2024-11-20T11:41:46.191Z] Copying: 71/1024 [MB] (35 MBps) [2024-11-20T11:41:47.126Z] Copying: 100/1024 [MB] (28 MBps) [2024-11-20T11:41:48.062Z] Copying: 129/1024 [MB] (28 MBps) [2024-11-20T11:41:49.436Z] Copying: 161/1024 [MB] (31 MBps) [2024-11-20T11:41:50.373Z] Copying: 191/1024 [MB] (30 MBps) [2024-11-20T11:41:51.308Z] Copying: 220/1024 [MB] (29 MBps) [2024-11-20T11:41:52.245Z] Copying: 250/1024 [MB] (29 MBps) [2024-11-20T11:41:53.185Z] Copying: 280/1024 [MB] (30 MBps) [2024-11-20T11:41:54.122Z] Copying: 311/1024 [MB] (30 MBps) [2024-11-20T11:41:55.059Z] Copying: 341/1024 [MB] (30 MBps) [2024-11-20T11:41:56.437Z] Copying: 371/1024 [MB] (30 MBps) [2024-11-20T11:41:57.373Z] Copying: 400/1024 [MB] (29 MBps) [2024-11-20T11:41:58.308Z] Copying: 430/1024 [MB] (30 MBps) [2024-11-20T11:41:59.243Z] Copying: 462/1024 [MB] (31 MBps) [2024-11-20T11:42:00.179Z] Copying: 493/1024 [MB] (30 MBps) [2024-11-20T11:42:01.116Z] Copying: 524/1024 [MB] (31 MBps) [2024-11-20T11:42:02.078Z] Copying: 555/1024 [MB] (31 MBps) [2024-11-20T11:42:03.453Z] Copying: 585/1024 [MB] (30 MBps) [2024-11-20T11:42:04.390Z] Copying: 616/1024 [MB] (30 MBps) [2024-11-20T11:42:05.326Z] Copying: 645/1024 [MB] (29 MBps) [2024-11-20T11:42:06.263Z] Copying: 675/1024 [MB] (29 MBps) [2024-11-20T11:42:07.199Z] Copying: 705/1024 [MB] (29 MBps) [2024-11-20T11:42:08.137Z] Copying: 734/1024 [MB] (29 MBps) [2024-11-20T11:42:09.132Z] Copying: 764/1024 [MB] (29 MBps) [2024-11-20T11:42:10.073Z] Copying: 794/1024 [MB] (29 MBps) [2024-11-20T11:42:11.451Z] Copying: 823/1024 [MB] (29 MBps) [2024-11-20T11:42:12.388Z] Copying: 852/1024 [MB] (29 MBps) [2024-11-20T11:42:13.323Z] Copying: 882/1024 [MB] (29 MBps) [2024-11-20T11:42:14.256Z] Copying: 912/1024 [MB] (29 MBps) [2024-11-20T11:42:15.193Z] Copying: 942/1024 [MB] (30 MBps) [2024-11-20T11:42:16.129Z] Copying: 973/1024 [MB] (30 MBps) [2024-11-20T11:42:16.697Z] Copying: 1003/1024 [MB] (30 MBps) [2024-11-20T11:42:16.956Z] Copying: 1024/1024 [MB] (average 30 MBps)[2024-11-20 11:42:16.828331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.194 [2024-11-20 11:42:16.828465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:11.194 [2024-11-20 11:42:16.828542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:11.194 [2024-11-20 11:42:16.828573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.194 [2024-11-20 11:42:16.828634] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:11.194 [2024-11-20 11:42:16.837901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:11.194 [2024-11-20 11:42:16.837947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:11.194 [2024-11-20 11:42:16.837971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.221 ms 00:28:11.194 [2024-11-20 11:42:16.837984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.194 [2024-11-20 11:42:16.838229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.194 [2024-11-20 11:42:16.838246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:11.194 [2024-11-20 11:42:16.838259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:28:11.194 [2024-11-20 11:42:16.838271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.194 [2024-11-20 11:42:16.840844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.194 [2024-11-20 11:42:16.841060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:11.194 [2024-11-20 11:42:16.841083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.556 ms 00:28:11.194 [2024-11-20 11:42:16.841105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.194 [2024-11-20 11:42:16.846018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.194 [2024-11-20 11:42:16.846058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:11.194 [2024-11-20 11:42:16.846074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.859 ms 00:28:11.194 [2024-11-20 11:42:16.846086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.194 [2024-11-20 11:42:16.884034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.194 [2024-11-20 11:42:16.884076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:11.194 [2024-11-20 11:42:16.884092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.890 ms 00:28:11.194 [2024-11-20 11:42:16.884103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.194 [2024-11-20 11:42:16.905283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.194 [2024-11-20 11:42:16.905325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:11.194 [2024-11-20 11:42:16.905341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.136 ms 00:28:11.194 [2024-11-20 11:42:16.905352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.194 [2024-11-20 11:42:16.905501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.194 [2024-11-20 11:42:16.905525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:11.194 [2024-11-20 11:42:16.905538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:28:11.194 [2024-11-20 11:42:16.905550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.194 [2024-11-20 11:42:16.940797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.194 [2024-11-20 11:42:16.940986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:11.194 [2024-11-20 11:42:16.941012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.227 ms 00:28:11.194 [2024-11-20 11:42:16.941024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.455 [2024-11-20 11:42:16.975227] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.455 [2024-11-20 11:42:16.975284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:11.455 [2024-11-20 11:42:16.975299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.140 ms 00:28:11.455 [2024-11-20 11:42:16.975310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.455 [2024-11-20 11:42:17.009388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.455 [2024-11-20 11:42:17.009432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:11.455 [2024-11-20 11:42:17.009447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.037 ms 00:28:11.455 [2024-11-20 11:42:17.009458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.455 [2024-11-20 11:42:17.042943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.455 [2024-11-20 11:42:17.043105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:11.455 [2024-11-20 11:42:17.043127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.393 ms 00:28:11.455 [2024-11-20 11:42:17.043139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.455 [2024-11-20 11:42:17.043219] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:11.455 [2024-11-20 11:42:17.043240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 
state: free 00:28:11.455 [2024-11-20 11:42:17.043428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 
0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.043991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.044003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.044015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.044027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.044039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.044050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.044062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.044073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.044085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.044097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.044108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.044120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:11.455 [2024-11-20 11:42:17.044132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:11.456 [2024-11-20 11:42:17.044144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:11.456 [2024-11-20 11:42:17.044155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:11.456 [2024-11-20 11:42:17.044166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:11.456 [2024-11-20 11:42:17.044177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:11.456 [2024-11-20 11:42:17.044188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:11.456 [2024-11-20 11:42:17.044200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:11.456 [2024-11-20 11:42:17.044212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:11.456 [2024-11-20 11:42:17.044224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:11.456 [2024-11-20 11:42:17.044236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:11.456 [2024-11-20 11:42:17.044247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:11.456 [2024-11-20 11:42:17.044259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:11.456 [2024-11-20 11:42:17.044272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:11.456 [2024-11-20 11:42:17.044283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:11.456 [2024-11-20 11:42:17.044294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:11.456 [2024-11-20 11:42:17.044306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:11.456 [2024-11-20 11:42:17.044317] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:11.456 [2024-11-20 11:42:17.044330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:11.456 [2024-11-20 11:42:17.044341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:11.456 [2024-11-20 11:42:17.044353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:11.456 [2024-11-20 11:42:17.044364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:11.456 [2024-11-20 11:42:17.044376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:11.456 [2024-11-20 11:42:17.044388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:11.456 [2024-11-20 11:42:17.044399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:11.456 [2024-11-20 11:42:17.044411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:11.456 [2024-11-20 11:42:17.044423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:11.456 [2024-11-20 11:42:17.044435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:11.456 [2024-11-20 11:42:17.044454] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:11.456 [2024-11-20 11:42:17.044480] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 35c6d13a-9b5f-4be9-a9d4-969633558956 00:28:11.456 [2024-11-20 11:42:17.044493] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:11.456 [2024-11-20 11:42:17.044504] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:11.456 [2024-11-20 11:42:17.044515] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:11.456 [2024-11-20 11:42:17.044528] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:11.456 [2024-11-20 11:42:17.044539] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:11.456 [2024-11-20 11:42:17.044551] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:11.456 [2024-11-20 11:42:17.044574] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:11.456 [2024-11-20 11:42:17.044584] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:11.456 [2024-11-20 11:42:17.044594] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:11.456 [2024-11-20 11:42:17.044605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.456 [2024-11-20 11:42:17.044616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:11.456 [2024-11-20 11:42:17.044628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.387 ms 00:28:11.456 [2024-11-20 11:42:17.044640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.456 [2024-11-20 11:42:17.065526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.456 [2024-11-20 11:42:17.065673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:11.456 [2024-11-20 11:42:17.065755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 20.826 ms 00:28:11.456 [2024-11-20 11:42:17.065795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.456 [2024-11-20 11:42:17.066436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.456 [2024-11-20 11:42:17.066549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:11.456 [2024-11-20 11:42:17.066635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.592 ms 00:28:11.456 [2024-11-20 11:42:17.066686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.456 [2024-11-20 11:42:17.121112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.456 [2024-11-20 11:42:17.121262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:11.456 [2024-11-20 11:42:17.121342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.456 [2024-11-20 11:42:17.121399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.456 [2024-11-20 11:42:17.121578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.456 [2024-11-20 11:42:17.121632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:11.456 [2024-11-20 11:42:17.121709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.456 [2024-11-20 11:42:17.121830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.456 [2024-11-20 11:42:17.121946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.456 [2024-11-20 11:42:17.122038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:11.456 [2024-11-20 11:42:17.122079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.456 [2024-11-20 11:42:17.122154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.456 [2024-11-20 11:42:17.122208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.456 [2024-11-20 11:42:17.122245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:11.456 [2024-11-20 11:42:17.122444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.456 [2024-11-20 11:42:17.122501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.716 [2024-11-20 11:42:17.261431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.716 [2024-11-20 11:42:17.261714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:11.716 [2024-11-20 11:42:17.261821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.716 [2024-11-20 11:42:17.261864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.716 [2024-11-20 11:42:17.365321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.716 [2024-11-20 11:42:17.365631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:11.716 [2024-11-20 11:42:17.365726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.716 [2024-11-20 11:42:17.365768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.716 [2024-11-20 11:42:17.365939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.716 [2024-11-20 11:42:17.366129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:11.716 
[2024-11-20 11:42:17.366174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:11.716 [2024-11-20 11:42:17.366210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:11.716 [2024-11-20 11:42:17.366310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:11.716 [2024-11-20 11:42:17.366350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:28:11.716 [2024-11-20 11:42:17.366387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:11.716 [2024-11-20 11:42:17.366423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:11.716 [2024-11-20 11:42:17.366796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:11.716 [2024-11-20 11:42:17.366914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:28:11.716 [2024-11-20 11:42:17.366993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:11.716 [2024-11-20 11:42:17.367032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:11.716 [2024-11-20 11:42:17.367125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:11.716 [2024-11-20 11:42:17.367282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:28:11.716 [2024-11-20 11:42:17.367340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:11.716 [2024-11-20 11:42:17.367376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:11.716 [2024-11-20 11:42:17.367459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:11.716 [2024-11-20 11:42:17.367615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:28:11.716 [2024-11-20 11:42:17.367658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:11.716 [2024-11-20 11:42:17.367694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:11.716 [2024-11-20 11:42:17.367785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:11.716 [2024-11-20 11:42:17.367959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:28:11.716 [2024-11-20 11:42:17.367978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:11.716 [2024-11-20 11:42:17.367991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:11.716 [2024-11-20 11:42:17.368169] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 539.805 ms, result 0
00:28:13.094 
00:28:13.094 
00:28:13.094 11:42:18 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:28:14.999 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:28:14.999 11:42:20 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072
[2024-11-20 11:42:20.450341] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization...
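The two restore.sh commands above amount to a verify-then-rewrite cycle: check the data read back from ftl0 against the checksum recorded before shutdown, then write the file back into the FTL bdev at an offset. A minimal Bash sketch of that cycle follows; it is illustrative only, not part of the captured output. The paths and spdk_dd flags are taken verbatim from the log, while the SPDK/TESTFILE/FTL_JSON variable names are introduced here for readability.

  #!/usr/bin/env bash
  # Sketch of the verify-then-rewrite step performed by ftl/restore.sh
  # in this run (variable names are illustrative, paths are from the log).
  set -euo pipefail

  SPDK=/home/vagrant/spdk_repo/spdk          # repo checkout used by this run
  TESTFILE=$SPDK/test/ftl/testfile           # data file read back from ftl0
  FTL_JSON=$SPDK/test/ftl/config/ftl.json    # saved bdev/FTL configuration

  # restore.sh@76: compare the restored file against the checksum
  # recorded before the FTL device was shut down.
  md5sum -c "$TESTFILE.md5"

  # restore.sh@79: write the file back into the FTL bdev at a
  # 131072-block offset, re-attaching the device via the JSON config.
  "$SPDK/build/bin/spdk_dd" --if="$TESTFILE" --ob=ftl0 \
      --json="$FTL_JSON" --seek=131072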
00:28:14.999 [2024-11-20 11:42:20.450548] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80766 ] 00:28:14.999 [2024-11-20 11:42:20.641813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:15.260 [2024-11-20 11:42:20.819715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:15.520 [2024-11-20 11:42:21.238048] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:15.520 [2024-11-20 11:42:21.238132] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:15.780 [2024-11-20 11:42:21.405050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.780 [2024-11-20 11:42:21.405137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:15.780 [2024-11-20 11:42:21.405161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:15.780 [2024-11-20 11:42:21.405173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.780 [2024-11-20 11:42:21.405227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.780 [2024-11-20 11:42:21.405241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:15.780 [2024-11-20 11:42:21.405256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:28:15.780 [2024-11-20 11:42:21.405267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.780 [2024-11-20 11:42:21.405290] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:15.781 [2024-11-20 11:42:21.406240] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:15.781 [2024-11-20 11:42:21.406270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.781 [2024-11-20 11:42:21.406282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:15.781 [2024-11-20 11:42:21.406293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.985 ms 00:28:15.781 [2024-11-20 11:42:21.406304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.781 [2024-11-20 11:42:21.408877] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:15.781 [2024-11-20 11:42:21.428400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.781 [2024-11-20 11:42:21.428680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:15.781 [2024-11-20 11:42:21.428705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.524 ms 00:28:15.781 [2024-11-20 11:42:21.428717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.781 [2024-11-20 11:42:21.428794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.781 [2024-11-20 11:42:21.428807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:15.781 [2024-11-20 11:42:21.428819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:28:15.781 [2024-11-20 11:42:21.428830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.781 [2024-11-20 11:42:21.442006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:15.781 [2024-11-20 11:42:21.442035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:15.781 [2024-11-20 11:42:21.442049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.100 ms 00:28:15.781 [2024-11-20 11:42:21.442060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.781 [2024-11-20 11:42:21.442154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.781 [2024-11-20 11:42:21.442167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:15.781 [2024-11-20 11:42:21.442179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:28:15.781 [2024-11-20 11:42:21.442190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.781 [2024-11-20 11:42:21.442248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.781 [2024-11-20 11:42:21.442261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:15.781 [2024-11-20 11:42:21.442272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:15.781 [2024-11-20 11:42:21.442282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.781 [2024-11-20 11:42:21.442312] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:15.781 [2024-11-20 11:42:21.448092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.781 [2024-11-20 11:42:21.448319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:15.781 [2024-11-20 11:42:21.448340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.790 ms 00:28:15.781 [2024-11-20 11:42:21.448358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.781 [2024-11-20 11:42:21.448395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.781 [2024-11-20 11:42:21.448407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:15.781 [2024-11-20 11:42:21.448419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:15.781 [2024-11-20 11:42:21.448429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.781 [2024-11-20 11:42:21.448486] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:15.781 [2024-11-20 11:42:21.448514] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:15.781 [2024-11-20 11:42:21.448554] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:15.781 [2024-11-20 11:42:21.448579] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:15.781 [2024-11-20 11:42:21.448674] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:15.781 [2024-11-20 11:42:21.448689] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:15.781 [2024-11-20 11:42:21.448702] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:15.781 [2024-11-20 11:42:21.448717] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:15.781 [2024-11-20 11:42:21.448730] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:15.781 [2024-11-20 11:42:21.448743] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:15.781 [2024-11-20 11:42:21.448754] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:15.781 [2024-11-20 11:42:21.448765] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:15.781 [2024-11-20 11:42:21.448775] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:15.781 [2024-11-20 11:42:21.448791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.781 [2024-11-20 11:42:21.448802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:15.781 [2024-11-20 11:42:21.448813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:28:15.781 [2024-11-20 11:42:21.448824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.781 [2024-11-20 11:42:21.448899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.781 [2024-11-20 11:42:21.448910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:15.781 [2024-11-20 11:42:21.448922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:28:15.781 [2024-11-20 11:42:21.448931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.781 [2024-11-20 11:42:21.449031] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:15.781 [2024-11-20 11:42:21.449051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:15.781 [2024-11-20 11:42:21.449062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:15.781 [2024-11-20 11:42:21.449073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:15.781 [2024-11-20 11:42:21.449084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:15.781 [2024-11-20 11:42:21.449103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:15.781 [2024-11-20 11:42:21.449114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:15.781 [2024-11-20 11:42:21.449126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:15.781 [2024-11-20 11:42:21.449138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:15.781 [2024-11-20 11:42:21.449148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:15.781 [2024-11-20 11:42:21.449158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:15.781 [2024-11-20 11:42:21.449168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:15.781 [2024-11-20 11:42:21.449178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:15.781 [2024-11-20 11:42:21.449188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:15.781 [2024-11-20 11:42:21.449198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:15.781 [2024-11-20 11:42:21.449217] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:15.781 [2024-11-20 11:42:21.449227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:15.781 [2024-11-20 11:42:21.449236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:15.781 [2024-11-20 11:42:21.449245] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:15.781 [2024-11-20 11:42:21.449255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:15.781 [2024-11-20 11:42:21.449265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:15.781 [2024-11-20 11:42:21.449275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:15.781 [2024-11-20 11:42:21.449285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:15.781 [2024-11-20 11:42:21.449295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:15.781 [2024-11-20 11:42:21.449305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:15.781 [2024-11-20 11:42:21.449315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:15.781 [2024-11-20 11:42:21.449324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:15.781 [2024-11-20 11:42:21.449334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:15.781 [2024-11-20 11:42:21.449344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:15.781 [2024-11-20 11:42:21.449353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:15.781 [2024-11-20 11:42:21.449363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:15.781 [2024-11-20 11:42:21.449372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:15.781 [2024-11-20 11:42:21.449381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:15.781 [2024-11-20 11:42:21.449391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:15.781 [2024-11-20 11:42:21.449400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:15.781 [2024-11-20 11:42:21.449409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:15.781 [2024-11-20 11:42:21.449418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:15.781 [2024-11-20 11:42:21.449427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:15.781 [2024-11-20 11:42:21.449436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:15.781 [2024-11-20 11:42:21.449448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:15.781 [2024-11-20 11:42:21.449457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:15.781 [2024-11-20 11:42:21.449466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:15.781 [2024-11-20 11:42:21.449488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:15.781 [2024-11-20 11:42:21.449499] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:15.781 [2024-11-20 11:42:21.449510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:15.781 [2024-11-20 11:42:21.449521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:15.781 [2024-11-20 11:42:21.449531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:15.781 [2024-11-20 11:42:21.449542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:15.781 [2024-11-20 11:42:21.449552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:15.782 [2024-11-20 11:42:21.449562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:15.782 
[2024-11-20 11:42:21.449572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:15.782 [2024-11-20 11:42:21.449581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:15.782 [2024-11-20 11:42:21.449591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:15.782 [2024-11-20 11:42:21.449601] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:15.782 [2024-11-20 11:42:21.449614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:15.782 [2024-11-20 11:42:21.449625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:15.782 [2024-11-20 11:42:21.449636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:15.782 [2024-11-20 11:42:21.449647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:15.782 [2024-11-20 11:42:21.449657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:15.782 [2024-11-20 11:42:21.449669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:15.782 [2024-11-20 11:42:21.449680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:15.782 [2024-11-20 11:42:21.449691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:15.782 [2024-11-20 11:42:21.449702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:15.782 [2024-11-20 11:42:21.449713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:15.782 [2024-11-20 11:42:21.449724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:15.782 [2024-11-20 11:42:21.449734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:15.782 [2024-11-20 11:42:21.449744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:15.782 [2024-11-20 11:42:21.449754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:15.782 [2024-11-20 11:42:21.449775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:15.782 [2024-11-20 11:42:21.449785] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:15.782 [2024-11-20 11:42:21.449800] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:15.782 [2024-11-20 11:42:21.449813] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:15.782 [2024-11-20 11:42:21.449824] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:15.782 [2024-11-20 11:42:21.449835] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:15.782 [2024-11-20 11:42:21.449845] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:15.782 [2024-11-20 11:42:21.449857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.782 [2024-11-20 11:42:21.449867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:15.782 [2024-11-20 11:42:21.449877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.880 ms 00:28:15.782 [2024-11-20 11:42:21.449887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.782 [2024-11-20 11:42:21.500802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.782 [2024-11-20 11:42:21.500841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:15.782 [2024-11-20 11:42:21.500855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.862 ms 00:28:15.782 [2024-11-20 11:42:21.500867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.782 [2024-11-20 11:42:21.500955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.782 [2024-11-20 11:42:21.500968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:15.782 [2024-11-20 11:42:21.500978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:28:15.782 [2024-11-20 11:42:21.500989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.042 [2024-11-20 11:42:21.567862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.042 [2024-11-20 11:42:21.568088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:16.042 [2024-11-20 11:42:21.568113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.779 ms 00:28:16.042 [2024-11-20 11:42:21.568125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.042 [2024-11-20 11:42:21.568173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.042 [2024-11-20 11:42:21.568185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:16.042 [2024-11-20 11:42:21.568204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:16.042 [2024-11-20 11:42:21.568215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.042 [2024-11-20 11:42:21.569092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.042 [2024-11-20 11:42:21.569123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:16.042 [2024-11-20 11:42:21.569135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.806 ms 00:28:16.042 [2024-11-20 11:42:21.569146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.042 [2024-11-20 11:42:21.569297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.042 [2024-11-20 11:42:21.569311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:16.042 [2024-11-20 11:42:21.569322] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:28:16.042 [2024-11-20 11:42:21.569340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.042 [2024-11-20 11:42:21.593704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.042 [2024-11-20 11:42:21.593741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:16.042 [2024-11-20 11:42:21.593761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.341 ms 00:28:16.042 [2024-11-20 11:42:21.593772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.042 [2024-11-20 11:42:21.614964] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:16.042 [2024-11-20 11:42:21.615004] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:16.042 [2024-11-20 11:42:21.615020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.042 [2024-11-20 11:42:21.615033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:16.042 [2024-11-20 11:42:21.615045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.122 ms 00:28:16.042 [2024-11-20 11:42:21.615056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.042 [2024-11-20 11:42:21.645625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.042 [2024-11-20 11:42:21.645822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:16.042 [2024-11-20 11:42:21.645844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.523 ms 00:28:16.042 [2024-11-20 11:42:21.645857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.042 [2024-11-20 11:42:21.663923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.042 [2024-11-20 11:42:21.663975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:16.042 [2024-11-20 11:42:21.663989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.024 ms 00:28:16.042 [2024-11-20 11:42:21.664000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.042 [2024-11-20 11:42:21.682067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.042 [2024-11-20 11:42:21.682103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:16.042 [2024-11-20 11:42:21.682117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.029 ms 00:28:16.042 [2024-11-20 11:42:21.682128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.042 [2024-11-20 11:42:21.682992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.042 [2024-11-20 11:42:21.683022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:16.042 [2024-11-20 11:42:21.683035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.748 ms 00:28:16.042 [2024-11-20 11:42:21.683051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.042 [2024-11-20 11:42:21.780484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.042 [2024-11-20 11:42:21.780570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:16.042 [2024-11-20 11:42:21.780595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 97.408 ms 00:28:16.042 [2024-11-20 11:42:21.780607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.042 [2024-11-20 11:42:21.791823] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:16.042 [2024-11-20 11:42:21.796214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.042 [2024-11-20 11:42:21.796245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:16.042 [2024-11-20 11:42:21.796260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.545 ms 00:28:16.042 [2024-11-20 11:42:21.796271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.042 [2024-11-20 11:42:21.796381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.042 [2024-11-20 11:42:21.796395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:16.042 [2024-11-20 11:42:21.796407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:16.042 [2024-11-20 11:42:21.796422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.042 [2024-11-20 11:42:21.796523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.042 [2024-11-20 11:42:21.796537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:16.042 [2024-11-20 11:42:21.796548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:28:16.042 [2024-11-20 11:42:21.796558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.042 [2024-11-20 11:42:21.796582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.042 [2024-11-20 11:42:21.796594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:16.042 [2024-11-20 11:42:21.796606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:16.042 [2024-11-20 11:42:21.796617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.042 [2024-11-20 11:42:21.796657] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:16.042 [2024-11-20 11:42:21.796674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.042 [2024-11-20 11:42:21.796685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:16.042 [2024-11-20 11:42:21.796696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:28:16.042 [2024-11-20 11:42:21.796706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.321 [2024-11-20 11:42:21.834002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.321 [2024-11-20 11:42:21.834263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:16.321 [2024-11-20 11:42:21.834289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.274 ms 00:28:16.321 [2024-11-20 11:42:21.834312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.321 [2024-11-20 11:42:21.834446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.321 [2024-11-20 11:42:21.834461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:16.321 [2024-11-20 11:42:21.834487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:28:16.321 [2024-11-20 11:42:21.834499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
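The trace_step records above always arrive as Action / name / duration / status quadruples, so per-step timings for a whole startup can be tallied mechanically. A minimal sketch in Python, assuming the raw one-record-per-line console log (this transcript has been flattened); the step_durations helper and the command-line handling are illustrative, not part of SPDK:

```python
import re
import sys
from collections import OrderedDict

# "name:" and "duration:" records emitted by mngt/ftl_mngt.c trace_step()
NAME_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+)$")
DUR_RE  = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([0-9.]+) ms")

def step_durations(lines):
    """Pair each 'name:' record with the 'duration:' record that follows it."""
    steps = OrderedDict()
    current = None
    for line in lines:
        m = NAME_RE.search(line)
        if m:
            current = m.group(1).strip()
            continue
        m = DUR_RE.search(line)
        if m and current is not None:
            steps[current] = steps.get(current, 0.0) + float(m.group(1))
            current = None
    return steps

if __name__ == "__main__":
    with open(sys.argv[1]) as log:
        for name, ms in sorted(step_durations(log).items(), key=lambda kv: -kv[1]):
            print(f"{ms:10.3f} ms  {name}")
```

Run against this log, the big contributors above (Restore P2L checkpoints 97.408 ms, Initialize NV cache 66.779 ms, Initialize metadata 50.862 ms, ...) roughly account for the 430.482 ms reported for the whole 'FTL startup' management process just below.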
00:28:16.321 [2024-11-20 11:42:21.836098] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 430.482 ms, result 0 00:28:17.325  [2024-11-20T11:42:24.024Z] Copying: 32/1024 [MB] (32 MBps) [2024-11-20T11:42:24.961Z] Copying: 64/1024 [MB] (31 MBps) [2024-11-20T11:42:25.897Z] Copying: 96/1024 [MB] (32 MBps) [2024-11-20T11:42:27.275Z] Copying: 127/1024 [MB] (31 MBps) [2024-11-20T11:42:28.213Z] Copying: 158/1024 [MB] (30 MBps) [2024-11-20T11:42:29.149Z] Copying: 189/1024 [MB] (31 MBps) [2024-11-20T11:42:30.084Z] Copying: 218/1024 [MB] (29 MBps) [2024-11-20T11:42:31.019Z] Copying: 249/1024 [MB] (30 MBps) [2024-11-20T11:42:31.956Z] Copying: 279/1024 [MB] (29 MBps) [2024-11-20T11:42:32.893Z] Copying: 309/1024 [MB] (29 MBps) [2024-11-20T11:42:34.272Z] Copying: 338/1024 [MB] (29 MBps) [2024-11-20T11:42:35.234Z] Copying: 368/1024 [MB] (29 MBps) [2024-11-20T11:42:36.171Z] Copying: 398/1024 [MB] (29 MBps) [2024-11-20T11:42:37.108Z] Copying: 427/1024 [MB] (29 MBps) [2024-11-20T11:42:38.045Z] Copying: 457/1024 [MB] (29 MBps) [2024-11-20T11:42:38.982Z] Copying: 487/1024 [MB] (29 MBps) [2024-11-20T11:42:39.920Z] Copying: 516/1024 [MB] (29 MBps) [2024-11-20T11:42:40.856Z] Copying: 547/1024 [MB] (30 MBps) [2024-11-20T11:42:42.233Z] Copying: 576/1024 [MB] (29 MBps) [2024-11-20T11:42:43.171Z] Copying: 607/1024 [MB] (30 MBps) [2024-11-20T11:42:44.108Z] Copying: 637/1024 [MB] (30 MBps) [2024-11-20T11:42:45.045Z] Copying: 668/1024 [MB] (30 MBps) [2024-11-20T11:42:45.980Z] Copying: 699/1024 [MB] (31 MBps) [2024-11-20T11:42:46.916Z] Copying: 729/1024 [MB] (30 MBps) [2024-11-20T11:42:47.855Z] Copying: 759/1024 [MB] (30 MBps) [2024-11-20T11:42:49.233Z] Copying: 789/1024 [MB] (29 MBps) [2024-11-20T11:42:50.164Z] Copying: 816/1024 [MB] (27 MBps) [2024-11-20T11:42:51.095Z] Copying: 848/1024 [MB] (31 MBps) [2024-11-20T11:42:52.031Z] Copying: 878/1024 [MB] (30 MBps) [2024-11-20T11:42:52.964Z] Copying: 908/1024 [MB] (29 MBps) [2024-11-20T11:42:53.897Z] Copying: 937/1024 [MB] (29 MBps) [2024-11-20T11:42:55.273Z] Copying: 967/1024 [MB] (29 MBps) [2024-11-20T11:42:56.208Z] Copying: 997/1024 [MB] (29 MBps) [2024-11-20T11:42:56.775Z] Copying: 1023/1024 [MB] (25 MBps) [2024-11-20T11:42:56.775Z] Copying: 1024/1024 [MB] (average 29 MBps)[2024-11-20 11:42:56.623207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.013 [2024-11-20 11:42:56.623290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:51.013 [2024-11-20 11:42:56.623321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:51.013 [2024-11-20 11:42:56.623361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.013 [2024-11-20 11:42:56.625158] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:51.013 [2024-11-20 11:42:56.632407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.013 [2024-11-20 11:42:56.632441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:51.013 [2024-11-20 11:42:56.632456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.207 ms 00:28:51.013 [2024-11-20 11:42:56.632466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.013 [2024-11-20 11:42:56.643710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.013 [2024-11-20 11:42:56.643750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop 
core poller 00:28:51.013 [2024-11-20 11:42:56.643765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.291 ms 00:28:51.013 [2024-11-20 11:42:56.643776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.013 [2024-11-20 11:42:56.665097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.013 [2024-11-20 11:42:56.665156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:51.013 [2024-11-20 11:42:56.665172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.295 ms 00:28:51.013 [2024-11-20 11:42:56.665186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.013 [2024-11-20 11:42:56.670434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.013 [2024-11-20 11:42:56.670465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:51.013 [2024-11-20 11:42:56.670488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.210 ms 00:28:51.013 [2024-11-20 11:42:56.670499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.013 [2024-11-20 11:42:56.708149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.013 [2024-11-20 11:42:56.708308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:51.013 [2024-11-20 11:42:56.708328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.582 ms 00:28:51.013 [2024-11-20 11:42:56.708339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.013 [2024-11-20 11:42:56.729644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.013 [2024-11-20 11:42:56.729687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:51.013 [2024-11-20 11:42:56.729701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.268 ms 00:28:51.013 [2024-11-20 11:42:56.729712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.273 [2024-11-20 11:42:56.830036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.273 [2024-11-20 11:42:56.830211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:51.273 [2024-11-20 11:42:56.830237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 100.281 ms 00:28:51.273 [2024-11-20 11:42:56.830250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.273 [2024-11-20 11:42:56.867995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.273 [2024-11-20 11:42:56.868033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:51.273 [2024-11-20 11:42:56.868048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.721 ms 00:28:51.273 [2024-11-20 11:42:56.868058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.273 [2024-11-20 11:42:56.906017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.273 [2024-11-20 11:42:56.906088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:51.273 [2024-11-20 11:42:56.906104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.919 ms 00:28:51.273 [2024-11-20 11:42:56.906115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.273 [2024-11-20 11:42:56.944506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.273 [2024-11-20 11:42:56.944553] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:51.273 [2024-11-20 11:42:56.944569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.346 ms 00:28:51.273 [2024-11-20 11:42:56.944579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.273 [2024-11-20 11:42:56.981330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.273 [2024-11-20 11:42:56.981367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:51.273 [2024-11-20 11:42:56.981381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.645 ms 00:28:51.273 [2024-11-20 11:42:56.981391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.273 [2024-11-20 11:42:56.981428] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:51.273 [2024-11-20 11:42:56.981446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 120064 / 261120 wr_cnt: 1 state: open 00:28:51.273 [2024-11-20 11:42:56.981460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:51.273 [2024-11-20 11:42:56.981484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:51.273 [2024-11-20 11:42:56.981496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:51.273 [2024-11-20 11:42:56.981508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:51.273 [2024-11-20 11:42:56.981520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:51.273 [2024-11-20 11:42:56.981531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:51.273 [2024-11-20 11:42:56.981542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:51.273 [2024-11-20 11:42:56.981553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:51.273 [2024-11-20 11:42:56.981563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:51.273 [2024-11-20 11:42:56.981575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:51.273 [2024-11-20 11:42:56.981586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:51.273 [2024-11-20 11:42:56.981597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:51.273 [2024-11-20 11:42:56.981608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:51.273 [2024-11-20 11:42:56.981619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:51.273 [2024-11-20 11:42:56.981630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:51.273 [2024-11-20 11:42:56.981641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:51.273 [2024-11-20 11:42:56.981652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:51.273 [2024-11-20 11:42:56.981662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 
/ 261120 wr_cnt: 0 state: free 00:28:51.273 [2024-11-20 11:42:56.981673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:51.273 [2024-11-20 11:42:56.981684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.981695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.981705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.981716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.981726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.981737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.981747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.981757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.981768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.981778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.981790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.981801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.981812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.981823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.981834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.981844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.981855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.981866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.981877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.981888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.981899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.981909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.981920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.981930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.981941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.981951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.981961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.981972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.981982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.981993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.982003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.982014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.982024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.982035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.982045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.982055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.982066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.982076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.982086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.982097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.982107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.982118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.982129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.982139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.982149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.982160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.982170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.982180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.982191] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.982201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.982213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.982223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.982234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.982245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.982255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.982266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.982277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.982287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.982298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.982309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.982319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.982330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.982342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.982352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.982363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.982373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.982384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.982395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.982406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.982417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.982428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.982438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.982449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 
11:42:56.982460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.982477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.982489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.982500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.982511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.982522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.982532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:51.274 [2024-11-20 11:42:56.982551] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:51.274 [2024-11-20 11:42:56.982561] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 35c6d13a-9b5f-4be9-a9d4-969633558956 00:28:51.274 [2024-11-20 11:42:56.982573] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 120064 00:28:51.274 [2024-11-20 11:42:56.982583] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 121024 00:28:51.274 [2024-11-20 11:42:56.982593] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 120064 00:28:51.274 [2024-11-20 11:42:56.982603] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0080 00:28:51.274 [2024-11-20 11:42:56.982613] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:51.274 [2024-11-20 11:42:56.982629] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:51.274 [2024-11-20 11:42:56.982650] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:51.274 [2024-11-20 11:42:56.982659] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:51.274 [2024-11-20 11:42:56.982668] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:51.274 [2024-11-20 11:42:56.982678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.274 [2024-11-20 11:42:56.982689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:51.275 [2024-11-20 11:42:56.982699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.251 ms 00:28:51.275 [2024-11-20 11:42:56.982709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.275 [2024-11-20 11:42:57.003112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.275 [2024-11-20 11:42:57.003146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:51.275 [2024-11-20 11:42:57.003159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.366 ms 00:28:51.275 [2024-11-20 11:42:57.003175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.275 [2024-11-20 11:42:57.003761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.275 [2024-11-20 11:42:57.003779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:51.275 [2024-11-20 11:42:57.003790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.565 ms 00:28:51.275 [2024-11-20 11:42:57.003800] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.534 [2024-11-20 11:42:57.055816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:51.534 [2024-11-20 11:42:57.055864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:51.534 [2024-11-20 11:42:57.055885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:51.534 [2024-11-20 11:42:57.055896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.534 [2024-11-20 11:42:57.055972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:51.534 [2024-11-20 11:42:57.055983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:51.534 [2024-11-20 11:42:57.055993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:51.534 [2024-11-20 11:42:57.056003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.534 [2024-11-20 11:42:57.056093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:51.534 [2024-11-20 11:42:57.056107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:51.534 [2024-11-20 11:42:57.056118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:51.534 [2024-11-20 11:42:57.056133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.534 [2024-11-20 11:42:57.056150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:51.534 [2024-11-20 11:42:57.056161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:51.534 [2024-11-20 11:42:57.056171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:51.534 [2024-11-20 11:42:57.056181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.534 [2024-11-20 11:42:57.185657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:51.534 [2024-11-20 11:42:57.185722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:51.534 [2024-11-20 11:42:57.185746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:51.534 [2024-11-20 11:42:57.185757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.534 [2024-11-20 11:42:57.291313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:51.534 [2024-11-20 11:42:57.291372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:51.534 [2024-11-20 11:42:57.291389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:51.534 [2024-11-20 11:42:57.291401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.534 [2024-11-20 11:42:57.291510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:51.534 [2024-11-20 11:42:57.291524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:51.534 [2024-11-20 11:42:57.291536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:51.534 [2024-11-20 11:42:57.291547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.534 [2024-11-20 11:42:57.291599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:51.534 [2024-11-20 11:42:57.291611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:51.534 [2024-11-20 11:42:57.291622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:28:51.534 [2024-11-20 11:42:57.291632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.534 [2024-11-20 11:42:57.291746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:51.534 [2024-11-20 11:42:57.291765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:51.534 [2024-11-20 11:42:57.291780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:51.534 [2024-11-20 11:42:57.291794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.534 [2024-11-20 11:42:57.291852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:51.534 [2024-11-20 11:42:57.291869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:51.534 [2024-11-20 11:42:57.291885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:51.534 [2024-11-20 11:42:57.291899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.534 [2024-11-20 11:42:57.291946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:51.534 [2024-11-20 11:42:57.291958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:51.534 [2024-11-20 11:42:57.291968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:51.534 [2024-11-20 11:42:57.291978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.534 [2024-11-20 11:42:57.292026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:51.534 [2024-11-20 11:42:57.292044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:51.534 [2024-11-20 11:42:57.292055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:51.534 [2024-11-20 11:42:57.292065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.534 [2024-11-20 11:42:57.292184] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 670.478 ms, result 0 00:28:53.458 00:28:53.458 00:28:53.458 11:42:58 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:28:53.458 [2024-11-20 11:42:58.884007] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
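The statistics dump from the first run gives enough to re-derive the logged write amplification: WAF is simply total writes over user writes. A trivial check of the figures above:

```python
# Figures from the ftl_debug stats dump above (first run).
total_writes = 121_024   # "total writes: 121024"
user_writes  = 120_064   # "user writes: 120064"

print(f"WAF = {total_writes / user_writes:.4f}")  # -> WAF = 1.0080, as logged
```

The 960 extra writes on top of the 120064 user LBAs are the FTL's own housekeeping writes.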
00:28:53.458 [2024-11-20 11:42:58.884183] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81157 ] 00:28:53.458 [2024-11-20 11:42:59.076228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:53.458 [2024-11-20 11:42:59.190226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:54.028 [2024-11-20 11:42:59.556304] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:54.028 [2024-11-20 11:42:59.556371] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:54.028 [2024-11-20 11:42:59.718068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.028 [2024-11-20 11:42:59.718283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:54.028 [2024-11-20 11:42:59.718318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:54.028 [2024-11-20 11:42:59.718330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.028 [2024-11-20 11:42:59.718398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.028 [2024-11-20 11:42:59.718413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:54.028 [2024-11-20 11:42:59.718429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:28:54.028 [2024-11-20 11:42:59.718440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.028 [2024-11-20 11:42:59.718465] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:54.028 [2024-11-20 11:42:59.719581] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:54.028 [2024-11-20 11:42:59.719603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.028 [2024-11-20 11:42:59.719614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:54.028 [2024-11-20 11:42:59.719625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.143 ms 00:28:54.028 [2024-11-20 11:42:59.719635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.028 [2024-11-20 11:42:59.721058] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:54.028 [2024-11-20 11:42:59.740917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.028 [2024-11-20 11:42:59.740955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:54.028 [2024-11-20 11:42:59.740971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.859 ms 00:28:54.028 [2024-11-20 11:42:59.740982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.028 [2024-11-20 11:42:59.741052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.028 [2024-11-20 11:42:59.741065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:54.028 [2024-11-20 11:42:59.741076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:28:54.028 [2024-11-20 11:42:59.741094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.028 [2024-11-20 11:42:59.747960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:54.028 [2024-11-20 11:42:59.748117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:54.028 [2024-11-20 11:42:59.748140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.792 ms 00:28:54.028 [2024-11-20 11:42:59.748151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.028 [2024-11-20 11:42:59.748241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.028 [2024-11-20 11:42:59.748254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:54.028 [2024-11-20 11:42:59.748265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:28:54.028 [2024-11-20 11:42:59.748276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.028 [2024-11-20 11:42:59.748322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.028 [2024-11-20 11:42:59.748334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:54.028 [2024-11-20 11:42:59.748354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:54.028 [2024-11-20 11:42:59.748365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.028 [2024-11-20 11:42:59.748391] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:54.028 [2024-11-20 11:42:59.753252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.028 [2024-11-20 11:42:59.753282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:54.028 [2024-11-20 11:42:59.753294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.868 ms 00:28:54.028 [2024-11-20 11:42:59.753308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.028 [2024-11-20 11:42:59.753338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.028 [2024-11-20 11:42:59.753349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:54.029 [2024-11-20 11:42:59.753360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:54.029 [2024-11-20 11:42:59.753369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.029 [2024-11-20 11:42:59.753427] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:54.029 [2024-11-20 11:42:59.753451] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:54.029 [2024-11-20 11:42:59.753503] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:54.029 [2024-11-20 11:42:59.753526] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:54.029 [2024-11-20 11:42:59.753617] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:54.029 [2024-11-20 11:42:59.753631] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:54.029 [2024-11-20 11:42:59.753644] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:54.029 [2024-11-20 11:42:59.753658] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:54.029 [2024-11-20 11:42:59.753671] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:54.029 [2024-11-20 11:42:59.753682] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:54.029 [2024-11-20 11:42:59.753692] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:54.029 [2024-11-20 11:42:59.753702] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:54.029 [2024-11-20 11:42:59.753712] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:54.029 [2024-11-20 11:42:59.753726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.029 [2024-11-20 11:42:59.753736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:54.029 [2024-11-20 11:42:59.753747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:28:54.029 [2024-11-20 11:42:59.753757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.029 [2024-11-20 11:42:59.753830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.029 [2024-11-20 11:42:59.753840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:54.029 [2024-11-20 11:42:59.753851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:28:54.029 [2024-11-20 11:42:59.753860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.029 [2024-11-20 11:42:59.753980] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:54.029 [2024-11-20 11:42:59.754000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:54.029 [2024-11-20 11:42:59.754012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:54.029 [2024-11-20 11:42:59.754023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:54.029 [2024-11-20 11:42:59.754035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:54.029 [2024-11-20 11:42:59.754045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:54.029 [2024-11-20 11:42:59.754056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:54.029 [2024-11-20 11:42:59.754066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:54.029 [2024-11-20 11:42:59.754078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:54.029 [2024-11-20 11:42:59.754088] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:54.029 [2024-11-20 11:42:59.754098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:54.029 [2024-11-20 11:42:59.754108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:54.029 [2024-11-20 11:42:59.754118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:54.029 [2024-11-20 11:42:59.754128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:54.029 [2024-11-20 11:42:59.754139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:54.029 [2024-11-20 11:42:59.754158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:54.029 [2024-11-20 11:42:59.754169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:54.029 [2024-11-20 11:42:59.754179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:54.029 [2024-11-20 11:42:59.754189] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:54.029 [2024-11-20 11:42:59.754199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:54.029 [2024-11-20 11:42:59.754210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:54.029 [2024-11-20 11:42:59.754220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:54.029 [2024-11-20 11:42:59.754230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:54.029 [2024-11-20 11:42:59.754240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:54.029 [2024-11-20 11:42:59.754250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:54.029 [2024-11-20 11:42:59.754260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:54.029 [2024-11-20 11:42:59.754270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:54.029 [2024-11-20 11:42:59.754280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:54.029 [2024-11-20 11:42:59.754290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:54.029 [2024-11-20 11:42:59.754300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:54.029 [2024-11-20 11:42:59.754310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:54.029 [2024-11-20 11:42:59.754319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:54.029 [2024-11-20 11:42:59.754330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:54.029 [2024-11-20 11:42:59.754339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:54.029 [2024-11-20 11:42:59.754349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:54.029 [2024-11-20 11:42:59.754371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:54.029 [2024-11-20 11:42:59.754381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:54.029 [2024-11-20 11:42:59.754389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:54.029 [2024-11-20 11:42:59.754399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:54.029 [2024-11-20 11:42:59.754408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:54.029 [2024-11-20 11:42:59.754418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:54.029 [2024-11-20 11:42:59.754427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:54.029 [2024-11-20 11:42:59.754436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:54.029 [2024-11-20 11:42:59.754445] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:54.029 [2024-11-20 11:42:59.754455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:54.029 [2024-11-20 11:42:59.754464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:54.029 [2024-11-20 11:42:59.754474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:54.029 [2024-11-20 11:42:59.754485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:54.029 [2024-11-20 11:42:59.754495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:54.029 [2024-11-20 11:42:59.754517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:54.029 
[2024-11-20 11:42:59.754526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:54.029 [2024-11-20 11:42:59.754535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:54.029 [2024-11-20 11:42:59.754545] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:54.029 [2024-11-20 11:42:59.754556] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:54.029 [2024-11-20 11:42:59.754568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:54.029 [2024-11-20 11:42:59.754580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:54.030 [2024-11-20 11:42:59.754591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:54.030 [2024-11-20 11:42:59.754601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:54.030 [2024-11-20 11:42:59.754612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:54.030 [2024-11-20 11:42:59.754622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:54.030 [2024-11-20 11:42:59.754632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:54.030 [2024-11-20 11:42:59.754642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:54.030 [2024-11-20 11:42:59.754653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:54.030 [2024-11-20 11:42:59.754663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:54.030 [2024-11-20 11:42:59.754673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:54.030 [2024-11-20 11:42:59.754683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:54.030 [2024-11-20 11:42:59.754693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:54.030 [2024-11-20 11:42:59.754703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:54.030 [2024-11-20 11:42:59.754714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:54.030 [2024-11-20 11:42:59.754724] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:54.030 [2024-11-20 11:42:59.754739] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:54.030 [2024-11-20 11:42:59.754750] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:54.030 [2024-11-20 11:42:59.754761] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:54.030 [2024-11-20 11:42:59.754772] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:54.030 [2024-11-20 11:42:59.754782] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:54.030 [2024-11-20 11:42:59.754793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.030 [2024-11-20 11:42:59.754804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:54.030 [2024-11-20 11:42:59.754814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.875 ms 00:28:54.030 [2024-11-20 11:42:59.754823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.301 [2024-11-20 11:42:59.796536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.301 [2024-11-20 11:42:59.796578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:54.301 [2024-11-20 11:42:59.796593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.662 ms 00:28:54.301 [2024-11-20 11:42:59.796605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.301 [2024-11-20 11:42:59.796702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.301 [2024-11-20 11:42:59.796713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:54.301 [2024-11-20 11:42:59.796724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:28:54.301 [2024-11-20 11:42:59.796734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.301 [2024-11-20 11:42:59.859422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.301 [2024-11-20 11:42:59.859464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:54.301 [2024-11-20 11:42:59.859492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.613 ms 00:28:54.301 [2024-11-20 11:42:59.859503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.301 [2024-11-20 11:42:59.859552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.301 [2024-11-20 11:42:59.859563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:54.301 [2024-11-20 11:42:59.859574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:54.301 [2024-11-20 11:42:59.859588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.301 [2024-11-20 11:42:59.860068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.301 [2024-11-20 11:42:59.860089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:54.301 [2024-11-20 11:42:59.860100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms 00:28:54.301 [2024-11-20 11:42:59.860110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.301 [2024-11-20 11:42:59.860228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.301 [2024-11-20 11:42:59.860242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:54.301 [2024-11-20 11:42:59.860252] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:28:54.301 [2024-11-20 11:42:59.860268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.301 [2024-11-20 11:42:59.880306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.301 [2024-11-20 11:42:59.880467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:54.301 [2024-11-20 11:42:59.880503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.016 ms 00:28:54.301 [2024-11-20 11:42:59.880514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.301 [2024-11-20 11:42:59.900261] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:28:54.301 [2024-11-20 11:42:59.900298] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:54.301 [2024-11-20 11:42:59.900313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.301 [2024-11-20 11:42:59.900340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:54.301 [2024-11-20 11:42:59.900351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.678 ms 00:28:54.301 [2024-11-20 11:42:59.900362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.301 [2024-11-20 11:42:59.930666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.301 [2024-11-20 11:42:59.930828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:54.301 [2024-11-20 11:42:59.930850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.263 ms 00:28:54.301 [2024-11-20 11:42:59.930860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.301 [2024-11-20 11:42:59.949480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.301 [2024-11-20 11:42:59.949532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:54.301 [2024-11-20 11:42:59.949545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.561 ms 00:28:54.301 [2024-11-20 11:42:59.949555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.301 [2024-11-20 11:42:59.967885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.301 [2024-11-20 11:42:59.967919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:54.301 [2024-11-20 11:42:59.967932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.292 ms 00:28:54.301 [2024-11-20 11:42:59.967942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.301 [2024-11-20 11:42:59.968776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.301 [2024-11-20 11:42:59.968799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:54.301 [2024-11-20 11:42:59.968811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.723 ms 00:28:54.301 [2024-11-20 11:42:59.968825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.301 [2024-11-20 11:43:00.057105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.568 [2024-11-20 11:43:00.057425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:54.568 [2024-11-20 11:43:00.057466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 88.256 ms 00:28:54.568 [2024-11-20 11:43:00.057495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.568 [2024-11-20 11:43:00.070785] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:54.568 [2024-11-20 11:43:00.074025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.568 [2024-11-20 11:43:00.074058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:54.568 [2024-11-20 11:43:00.074073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.467 ms 00:28:54.568 [2024-11-20 11:43:00.074101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.568 [2024-11-20 11:43:00.074213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.568 [2024-11-20 11:43:00.074240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:54.568 [2024-11-20 11:43:00.074251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:54.568 [2024-11-20 11:43:00.074265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.568 [2024-11-20 11:43:00.075884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.568 [2024-11-20 11:43:00.076031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:54.568 [2024-11-20 11:43:00.076052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.558 ms 00:28:54.568 [2024-11-20 11:43:00.076062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.568 [2024-11-20 11:43:00.076104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.568 [2024-11-20 11:43:00.076117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:54.568 [2024-11-20 11:43:00.076128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:54.568 [2024-11-20 11:43:00.076138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.568 [2024-11-20 11:43:00.076174] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:54.568 [2024-11-20 11:43:00.076189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.568 [2024-11-20 11:43:00.076200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:54.568 [2024-11-20 11:43:00.076210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:28:54.568 [2024-11-20 11:43:00.076220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.568 [2024-11-20 11:43:00.114549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.568 [2024-11-20 11:43:00.114690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:54.568 [2024-11-20 11:43:00.114712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.309 ms 00:28:54.568 [2024-11-20 11:43:00.114729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.568 [2024-11-20 11:43:00.114805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.568 [2024-11-20 11:43:00.114818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:54.568 [2024-11-20 11:43:00.114829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:28:54.568 [2024-11-20 11:43:00.114840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
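Every management step above is bracketed by the same four trace_step notices from mngt/ftl_mngt.c: an Action (or, on teardown, Rollback) marker, the step name, its duration, and a status code, and the finish_msg that follows totals the whole pipeline ('FTL startup', 397.378 ms). As a rough offline cross-check, the per-step durations can simply be summed; they account for most, though not all, of the reported total, since there is overhead between steps. A minimal sketch, assuming the console text above has been captured to a hypothetical ftl.log:

    # Sum every per-step duration from the trace_step lines and compare
    # against the total printed by finish_msg; ftl.log is an assumed
    # capture of this console output.
    grep -o 'duration: [0-9.]\+ ms' ftl.log |
        awk '{ sum += $2 } END { printf "steps total: %.3f ms\n", sum }'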
00:28:54.568 [2024-11-20 11:43:00.115955] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 397.378 ms, result 0 00:28:55.944  [2024-11-20T11:43:02.643Z] Copying: 26/1024 [MB] (26 MBps) [2024-11-20T11:43:03.577Z] Copying: 58/1024 [MB] (31 MBps) [2024-11-20T11:43:04.519Z] Copying: 88/1024 [MB] (29 MBps) [2024-11-20T11:43:05.453Z] Copying: 118/1024 [MB] (30 MBps) [2024-11-20T11:43:06.389Z] Copying: 149/1024 [MB] (30 MBps) [2024-11-20T11:43:07.765Z] Copying: 180/1024 [MB] (30 MBps) [2024-11-20T11:43:08.699Z] Copying: 210/1024 [MB] (30 MBps) [2024-11-20T11:43:09.636Z] Copying: 241/1024 [MB] (30 MBps) [2024-11-20T11:43:10.572Z] Copying: 272/1024 [MB] (31 MBps) [2024-11-20T11:43:11.526Z] Copying: 304/1024 [MB] (31 MBps) [2024-11-20T11:43:12.508Z] Copying: 334/1024 [MB] (30 MBps) [2024-11-20T11:43:13.444Z] Copying: 365/1024 [MB] (30 MBps) [2024-11-20T11:43:14.381Z] Copying: 397/1024 [MB] (31 MBps) [2024-11-20T11:43:15.759Z] Copying: 428/1024 [MB] (31 MBps) [2024-11-20T11:43:16.698Z] Copying: 460/1024 [MB] (31 MBps) [2024-11-20T11:43:17.635Z] Copying: 491/1024 [MB] (31 MBps) [2024-11-20T11:43:18.571Z] Copying: 521/1024 [MB] (29 MBps) [2024-11-20T11:43:19.508Z] Copying: 551/1024 [MB] (30 MBps) [2024-11-20T11:43:20.445Z] Copying: 582/1024 [MB] (30 MBps) [2024-11-20T11:43:21.382Z] Copying: 613/1024 [MB] (30 MBps) [2024-11-20T11:43:22.761Z] Copying: 643/1024 [MB] (30 MBps) [2024-11-20T11:43:23.707Z] Copying: 674/1024 [MB] (31 MBps) [2024-11-20T11:43:24.644Z] Copying: 704/1024 [MB] (29 MBps) [2024-11-20T11:43:25.580Z] Copying: 735/1024 [MB] (31 MBps) [2024-11-20T11:43:26.513Z] Copying: 766/1024 [MB] (30 MBps) [2024-11-20T11:43:27.453Z] Copying: 796/1024 [MB] (30 MBps) [2024-11-20T11:43:28.387Z] Copying: 827/1024 [MB] (30 MBps) [2024-11-20T11:43:29.764Z] Copying: 858/1024 [MB] (31 MBps) [2024-11-20T11:43:30.699Z] Copying: 889/1024 [MB] (31 MBps) [2024-11-20T11:43:31.636Z] Copying: 921/1024 [MB] (31 MBps) [2024-11-20T11:43:32.572Z] Copying: 952/1024 [MB] (31 MBps) [2024-11-20T11:43:33.510Z] Copying: 982/1024 [MB] (30 MBps) [2024-11-20T11:43:33.769Z] Copying: 1012/1024 [MB] (29 MBps) [2024-11-20T11:43:34.028Z] Copying: 1024/1024 [MB] (average 30 MBps)[2024-11-20 11:43:33.795912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.266 [2024-11-20 11:43:33.796248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:28.266 [2024-11-20 11:43:33.796284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:28.266 [2024-11-20 11:43:33.796298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.266 [2024-11-20 11:43:33.796355] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:28.266 [2024-11-20 11:43:33.801784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.266 [2024-11-20 11:43:33.801824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:28.266 [2024-11-20 11:43:33.801840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.406 ms 00:29:28.266 [2024-11-20 11:43:33.801852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.266 [2024-11-20 11:43:33.802079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.266 [2024-11-20 11:43:33.802094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:28.266 [2024-11-20 11:43:33.802108] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:29:28.266 [2024-11-20 11:43:33.802121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.266 [2024-11-20 11:43:33.806202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.266 [2024-11-20 11:43:33.806352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:28.266 [2024-11-20 11:43:33.806441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.042 ms 00:29:28.266 [2024-11-20 11:43:33.806504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.266 [2024-11-20 11:43:33.812513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.267 [2024-11-20 11:43:33.812663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:28.267 [2024-11-20 11:43:33.812758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.938 ms 00:29:28.267 [2024-11-20 11:43:33.812799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.267 [2024-11-20 11:43:33.850577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.267 [2024-11-20 11:43:33.850730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:28.267 [2024-11-20 11:43:33.850818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.688 ms 00:29:28.267 [2024-11-20 11:43:33.850859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.267 [2024-11-20 11:43:33.887161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.267 [2024-11-20 11:43:33.887430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:28.267 [2024-11-20 11:43:33.887600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.232 ms 00:29:28.267 [2024-11-20 11:43:33.887664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.267 [2024-11-20 11:43:34.008674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.267 [2024-11-20 11:43:34.008910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:28.267 [2024-11-20 11:43:34.009070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 120.819 ms 00:29:28.267 [2024-11-20 11:43:34.009148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.526 [2024-11-20 11:43:34.068872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.526 [2024-11-20 11:43:34.069104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:28.526 [2024-11-20 11:43:34.069221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.645 ms 00:29:28.526 [2024-11-20 11:43:34.069278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.526 [2024-11-20 11:43:34.111303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.526 [2024-11-20 11:43:34.111444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:28.526 [2024-11-20 11:43:34.111623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.936 ms 00:29:28.526 [2024-11-20 11:43:34.111664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.526 [2024-11-20 11:43:34.148638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.526 [2024-11-20 11:43:34.148815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Persist superblock 00:29:28.526 [2024-11-20 11:43:34.148887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.903 ms 00:29:28.526 [2024-11-20 11:43:34.148923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.526 [2024-11-20 11:43:34.185682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.526 [2024-11-20 11:43:34.185841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:28.526 [2024-11-20 11:43:34.185862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.659 ms 00:29:28.526 [2024-11-20 11:43:34.185874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.526 [2024-11-20 11:43:34.185954] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:28.526 [2024-11-20 11:43:34.185974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:29:28.526 [2024-11-20 11:43:34.185987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:28.526 [2024-11-20 11:43:34.185999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:28.526 [2024-11-20 11:43:34.186011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:28.526 [2024-11-20 11:43:34.186023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:28.526 [2024-11-20 11:43:34.186034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:28.526 [2024-11-20 11:43:34.186045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:28.526 [2024-11-20 11:43:34.186056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:28.526 [2024-11-20 11:43:34.186067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:28.526 [2024-11-20 11:43:34.186078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:28.526 [2024-11-20 11:43:34.186090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 
11:43:34.186188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 
00:29:28.527 [2024-11-20 11:43:34.186456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 
wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.186994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.187005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.187016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.187026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.187037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.187047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.187057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.187068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:28.527 [2024-11-20 11:43:34.187079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:28.528 [2024-11-20 11:43:34.187097] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:28.528 [2024-11-20 11:43:34.187117] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 35c6d13a-9b5f-4be9-a9d4-969633558956 00:29:28.528 [2024-11-20 11:43:34.187128] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:29:28.528 [2024-11-20 11:43:34.187139] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 11968 00:29:28.528 [2024-11-20 11:43:34.187149] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 11008 00:29:28.528 [2024-11-20 11:43:34.187160] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0872 00:29:28.528 [2024-11-20 11:43:34.187170] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:28.528 [2024-11-20 11:43:34.187187] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:28.528 [2024-11-20 11:43:34.187197] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:28.528 [2024-11-20 11:43:34.187221] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:28.528 [2024-11-20 11:43:34.187230] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:28.528 [2024-11-20 11:43:34.187240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.528 [2024-11-20 11:43:34.187251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:28.528 [2024-11-20 11:43:34.187263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.287 ms 00:29:28.528 [2024-11-20 11:43:34.187273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.528 [2024-11-20 11:43:34.207694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.528 [2024-11-20 11:43:34.207822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:28.528 [2024-11-20 11:43:34.207859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.382 ms 00:29:28.528 [2024-11-20 11:43:34.207877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.528 [2024-11-20 11:43:34.208423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.528 [2024-11-20 11:43:34.208434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:28.528 [2024-11-20 11:43:34.208445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.522 ms 00:29:28.528 [2024-11-20 11:43:34.208455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.528 
[2024-11-20 11:43:34.261313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:28.528 [2024-11-20 11:43:34.261351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:28.528 [2024-11-20 11:43:34.261369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:28.528 [2024-11-20 11:43:34.261379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.528 [2024-11-20 11:43:34.261439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:28.528 [2024-11-20 11:43:34.261450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:28.528 [2024-11-20 11:43:34.261460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:28.528 [2024-11-20 11:43:34.261481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.528 [2024-11-20 11:43:34.261569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:28.528 [2024-11-20 11:43:34.261583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:28.528 [2024-11-20 11:43:34.261594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:28.528 [2024-11-20 11:43:34.261608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.528 [2024-11-20 11:43:34.261625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:28.528 [2024-11-20 11:43:34.261635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:28.528 [2024-11-20 11:43:34.261646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:28.528 [2024-11-20 11:43:34.261655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.787 [2024-11-20 11:43:34.385560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:28.787 [2024-11-20 11:43:34.385623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:28.787 [2024-11-20 11:43:34.385645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:28.787 [2024-11-20 11:43:34.385655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.787 [2024-11-20 11:43:34.486080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:28.787 [2024-11-20 11:43:34.486140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:28.787 [2024-11-20 11:43:34.486156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:28.787 [2024-11-20 11:43:34.486167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.787 [2024-11-20 11:43:34.486280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:28.787 [2024-11-20 11:43:34.486292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:28.787 [2024-11-20 11:43:34.486303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:28.787 [2024-11-20 11:43:34.486314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.787 [2024-11-20 11:43:34.486379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:28.787 [2024-11-20 11:43:34.486391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:28.787 [2024-11-20 11:43:34.486401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:28.787 [2024-11-20 11:43:34.486410] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.787 [2024-11-20 11:43:34.486545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:28.787 [2024-11-20 11:43:34.486559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:28.787 [2024-11-20 11:43:34.486571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:28.787 [2024-11-20 11:43:34.486581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.787 [2024-11-20 11:43:34.486621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:28.787 [2024-11-20 11:43:34.486634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:28.787 [2024-11-20 11:43:34.486644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:28.787 [2024-11-20 11:43:34.486654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.787 [2024-11-20 11:43:34.486695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:28.787 [2024-11-20 11:43:34.486707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:28.787 [2024-11-20 11:43:34.486717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:28.787 [2024-11-20 11:43:34.486727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.787 [2024-11-20 11:43:34.486776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:28.787 [2024-11-20 11:43:34.486788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:28.787 [2024-11-20 11:43:34.486799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:28.787 [2024-11-20 11:43:34.486809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.787 [2024-11-20 11:43:34.486974] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 690.986 ms, result 0 00:29:30.165 00:29:30.165 00:29:30.165 11:43:35 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:32.072 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:29:32.072 11:43:37 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:29:32.072 11:43:37 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:29:32.072 11:43:37 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:32.072 11:43:37 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:32.072 11:43:37 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:32.072 11:43:37 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 79757 00:29:32.072 11:43:37 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79757 ']' 00:29:32.072 11:43:37 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79757 00:29:32.072 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79757) - No such process 00:29:32.072 Process with pid 79757 is not found 00:29:32.072 11:43:37 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 79757 is not found' 00:29:32.072 Remove shared memory files 00:29:32.072 11:43:37 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:29:32.072 11:43:37 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 
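The statistics block dumped a few lines up (ftl_debug.c, 'Dump statistics') also makes the reported write amplification easy to verify: WAF is total device writes divided by user writes, and 11968 over 11008 reproduces the logged 1.0872.

    # WAF = total writes / user writes, using the ftl_dev_dump_stats
    # counters shown above.
    awk 'BEGIN { printf "WAF = %.4f\n", 11968 / 11008 }'   # prints WAF = 1.0872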
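The teardown trace above likewise shows the expected killprocess behaviour: pid 79757 had already exited (the FTL app goes away once 'FTL shutdown' completes), so kill -0 fails and the helper reports the fact instead of failing the test. A minimal sketch of that pattern, assuming a simplified stand-in for the autotest_common.sh helper:

    # Assumed simplification of the killprocess pattern traced above:
    # probe with kill -0 first and treat an already-exited pid as success.
    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        if ! kill -0 "$pid" 2>/dev/null; then
            echo "Process with pid $pid is not found"
            return 0
        fi
        kill "$pid" && wait "$pid"
    }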
00:29:32.072 11:43:37 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:29:32.072 11:43:37 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:29:32.072 11:43:37 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:29:32.072 11:43:37 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:32.072 11:43:37 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:29:32.072 ************************************ 00:29:32.072 END TEST ftl_restore 00:29:32.072 ************************************ 00:29:32.072 00:29:32.072 real 2m54.896s 00:29:32.072 user 2m41.593s 00:29:32.072 sys 0m15.305s 00:29:32.072 11:43:37 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:32.072 11:43:37 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:29:32.072 11:43:37 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:29:32.072 11:43:37 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:29:32.072 11:43:37 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:32.072 11:43:37 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:32.072 ************************************ 00:29:32.072 START TEST ftl_dirty_shutdown 00:29:32.072 ************************************ 00:29:32.072 11:43:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:29:32.072 * Looking for test storage... 00:29:32.072 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:32.072 11:43:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:32.072 11:43:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:29:32.072 11:43:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:32.072 11:43:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:32.072 11:43:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:32.072 11:43:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:32.072 11:43:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:32.072 11:43:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:29:32.072 11:43:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:29:32.072 11:43:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:29:32.072 11:43:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:29:32.072 11:43:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:29:32.072 11:43:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:32.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.073 --rc genhtml_branch_coverage=1 00:29:32.073 --rc genhtml_function_coverage=1 00:29:32.073 --rc genhtml_legend=1 00:29:32.073 --rc geninfo_all_blocks=1 00:29:32.073 --rc geninfo_unexecuted_blocks=1 00:29:32.073 00:29:32.073 ' 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:32.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.073 --rc genhtml_branch_coverage=1 00:29:32.073 --rc genhtml_function_coverage=1 00:29:32.073 --rc genhtml_legend=1 00:29:32.073 --rc geninfo_all_blocks=1 00:29:32.073 --rc geninfo_unexecuted_blocks=1 00:29:32.073 00:29:32.073 ' 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:32.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.073 --rc genhtml_branch_coverage=1 00:29:32.073 --rc genhtml_function_coverage=1 00:29:32.073 --rc genhtml_legend=1 00:29:32.073 --rc geninfo_all_blocks=1 00:29:32.073 --rc geninfo_unexecuted_blocks=1 00:29:32.073 00:29:32.073 ' 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:32.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.073 --rc genhtml_branch_coverage=1 00:29:32.073 --rc genhtml_function_coverage=1 00:29:32.073 --rc genhtml_legend=1 00:29:32.073 --rc geninfo_all_blocks=1 00:29:32.073 --rc geninfo_unexecuted_blocks=1 00:29:32.073 00:29:32.073 ' 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:29:32.073 11:43:37 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=81604 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:29:32.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 81604 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81604 ']' 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:32.073 11:43:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:32.333 [2024-11-20 11:43:37.920808] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
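While the new target comes up, the version check from the prologue above is worth decoding: the lt helper in scripts/common.sh splits both version strings on '.', '-' or ':' and compares them numerically field by field, which is why lcov 1.15 tests as older than 2 and the legacy --rc lcov_* options are selected. Condensed into a self-contained sketch (the real helper also validates every field through decimal, omitted here):

    # Condensed from the scripts/common.sh xtrace above: field-wise
    # numeric comparison of dotted version strings.
    lt() {
        local -a a b
        local i
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            if (( ${a[i]:-0} < ${b[i]:-0} )); then return 0; fi
            if (( ${a[i]:-0} > ${b[i]:-0} )); then return 1; fi
        done
        return 1   # equal is not less-than
    }
    lt 1.15 2 && echo 'lcov predates 2.x'   # same verdict as the trace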
00:29:32.333 [2024-11-20 11:43:37.920983] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81604 ] 00:29:32.592 [2024-11-20 11:43:38.121580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:32.592 [2024-11-20 11:43:38.292488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:33.528 11:43:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:33.528 11:43:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:33.528 11:43:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:29:33.528 11:43:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:29:33.528 11:43:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:33.528 11:43:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:29:33.528 11:43:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:29:33.528 11:43:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:29:33.787 11:43:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:29:33.787 11:43:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:29:33.787 11:43:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:29:33.787 11:43:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:29:33.787 11:43:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:33.787 11:43:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:33.788 11:43:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:33.788 11:43:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:29:34.355 11:43:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:34.355 { 00:29:34.355 "name": "nvme0n1", 00:29:34.355 "aliases": [ 00:29:34.355 "6c8f8ddd-5427-4a7d-8345-907f591a74ce" 00:29:34.355 ], 00:29:34.355 "product_name": "NVMe disk", 00:29:34.355 "block_size": 4096, 00:29:34.355 "num_blocks": 1310720, 00:29:34.355 "uuid": "6c8f8ddd-5427-4a7d-8345-907f591a74ce", 00:29:34.355 "numa_id": -1, 00:29:34.355 "assigned_rate_limits": { 00:29:34.355 "rw_ios_per_sec": 0, 00:29:34.355 "rw_mbytes_per_sec": 0, 00:29:34.355 "r_mbytes_per_sec": 0, 00:29:34.355 "w_mbytes_per_sec": 0 00:29:34.355 }, 00:29:34.355 "claimed": true, 00:29:34.355 "claim_type": "read_many_write_one", 00:29:34.355 "zoned": false, 00:29:34.355 "supported_io_types": { 00:29:34.355 "read": true, 00:29:34.355 "write": true, 00:29:34.355 "unmap": true, 00:29:34.355 "flush": true, 00:29:34.355 "reset": true, 00:29:34.355 "nvme_admin": true, 00:29:34.355 "nvme_io": true, 00:29:34.355 "nvme_io_md": false, 00:29:34.355 "write_zeroes": true, 00:29:34.355 "zcopy": false, 00:29:34.355 "get_zone_info": false, 00:29:34.355 "zone_management": false, 00:29:34.355 "zone_append": false, 00:29:34.356 "compare": true, 00:29:34.356 "compare_and_write": false, 00:29:34.356 "abort": true, 00:29:34.356 "seek_hole": false, 00:29:34.356 "seek_data": false, 00:29:34.356 
"copy": true, 00:29:34.356 "nvme_iov_md": false 00:29:34.356 }, 00:29:34.356 "driver_specific": { 00:29:34.356 "nvme": [ 00:29:34.356 { 00:29:34.356 "pci_address": "0000:00:11.0", 00:29:34.356 "trid": { 00:29:34.356 "trtype": "PCIe", 00:29:34.356 "traddr": "0000:00:11.0" 00:29:34.356 }, 00:29:34.356 "ctrlr_data": { 00:29:34.356 "cntlid": 0, 00:29:34.356 "vendor_id": "0x1b36", 00:29:34.356 "model_number": "QEMU NVMe Ctrl", 00:29:34.356 "serial_number": "12341", 00:29:34.356 "firmware_revision": "8.0.0", 00:29:34.356 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:34.356 "oacs": { 00:29:34.356 "security": 0, 00:29:34.356 "format": 1, 00:29:34.356 "firmware": 0, 00:29:34.356 "ns_manage": 1 00:29:34.356 }, 00:29:34.356 "multi_ctrlr": false, 00:29:34.356 "ana_reporting": false 00:29:34.356 }, 00:29:34.356 "vs": { 00:29:34.356 "nvme_version": "1.4" 00:29:34.356 }, 00:29:34.356 "ns_data": { 00:29:34.356 "id": 1, 00:29:34.356 "can_share": false 00:29:34.356 } 00:29:34.356 } 00:29:34.356 ], 00:29:34.356 "mp_policy": "active_passive" 00:29:34.356 } 00:29:34.356 } 00:29:34.356 ]' 00:29:34.356 11:43:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:34.356 11:43:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:34.356 11:43:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:34.356 11:43:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:29:34.356 11:43:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:29:34.356 11:43:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:29:34.356 11:43:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:29:34.356 11:43:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:29:34.356 11:43:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:29:34.356 11:43:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:34.356 11:43:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:34.614 11:43:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=7147db90-1f03-4a67-b95c-de504d435f58 00:29:34.614 11:43:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:29:34.614 11:43:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7147db90-1f03-4a67-b95c-de504d435f58 00:29:34.873 11:43:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:29:35.133 11:43:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=08ba8b23-f709-40ee-9f29-586ab37c03e3 00:29:35.133 11:43:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 08ba8b23-f709-40ee-9f29-586ab37c03e3 00:29:35.133 11:43:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=73954dfa-7063-4eb1-8a54-bd67b6035a75 00:29:35.133 11:43:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:29:35.133 11:43:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 73954dfa-7063-4eb1-8a54-bd67b6035a75 00:29:35.133 11:43:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:29:35.133 11:43:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:29:35.133 11:43:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=73954dfa-7063-4eb1-8a54-bd67b6035a75 00:29:35.133 11:43:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:29:35.133 11:43:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 73954dfa-7063-4eb1-8a54-bd67b6035a75 00:29:35.133 11:43:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=73954dfa-7063-4eb1-8a54-bd67b6035a75 00:29:35.133 11:43:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:35.133 11:43:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:35.133 11:43:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:35.133 11:43:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 73954dfa-7063-4eb1-8a54-bd67b6035a75 00:29:35.392 11:43:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:35.392 { 00:29:35.392 "name": "73954dfa-7063-4eb1-8a54-bd67b6035a75", 00:29:35.392 "aliases": [ 00:29:35.392 "lvs/nvme0n1p0" 00:29:35.392 ], 00:29:35.392 "product_name": "Logical Volume", 00:29:35.392 "block_size": 4096, 00:29:35.392 "num_blocks": 26476544, 00:29:35.392 "uuid": "73954dfa-7063-4eb1-8a54-bd67b6035a75", 00:29:35.392 "assigned_rate_limits": { 00:29:35.392 "rw_ios_per_sec": 0, 00:29:35.392 "rw_mbytes_per_sec": 0, 00:29:35.392 "r_mbytes_per_sec": 0, 00:29:35.392 "w_mbytes_per_sec": 0 00:29:35.392 }, 00:29:35.392 "claimed": false, 00:29:35.392 "zoned": false, 00:29:35.392 "supported_io_types": { 00:29:35.392 "read": true, 00:29:35.392 "write": true, 00:29:35.392 "unmap": true, 00:29:35.392 "flush": false, 00:29:35.392 "reset": true, 00:29:35.392 "nvme_admin": false, 00:29:35.392 "nvme_io": false, 00:29:35.392 "nvme_io_md": false, 00:29:35.392 "write_zeroes": true, 00:29:35.392 "zcopy": false, 00:29:35.392 "get_zone_info": false, 00:29:35.392 "zone_management": false, 00:29:35.392 "zone_append": false, 00:29:35.392 "compare": false, 00:29:35.392 "compare_and_write": false, 00:29:35.392 "abort": false, 00:29:35.392 "seek_hole": true, 00:29:35.392 "seek_data": true, 00:29:35.392 "copy": false, 00:29:35.392 "nvme_iov_md": false 00:29:35.392 }, 00:29:35.392 "driver_specific": { 00:29:35.392 "lvol": { 00:29:35.392 "lvol_store_uuid": "08ba8b23-f709-40ee-9f29-586ab37c03e3", 00:29:35.392 "base_bdev": "nvme0n1", 00:29:35.392 "thin_provision": true, 00:29:35.392 "num_allocated_clusters": 0, 00:29:35.392 "snapshot": false, 00:29:35.392 "clone": false, 00:29:35.392 "esnap_clone": false 00:29:35.392 } 00:29:35.392 } 00:29:35.392 } 00:29:35.392 ]' 00:29:35.392 11:43:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:35.392 11:43:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:35.392 11:43:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:35.392 11:43:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:35.392 11:43:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:35.392 11:43:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:29:35.392 11:43:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:29:35.392 11:43:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:29:35.392 11:43:41 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:29:35.652 11:43:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:29:35.652 11:43:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:29:35.652 11:43:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 73954dfa-7063-4eb1-8a54-bd67b6035a75 00:29:35.652 11:43:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=73954dfa-7063-4eb1-8a54-bd67b6035a75 00:29:35.652 11:43:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:35.652 11:43:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:35.652 11:43:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:35.652 11:43:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 73954dfa-7063-4eb1-8a54-bd67b6035a75 00:29:35.911 11:43:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:35.911 { 00:29:35.911 "name": "73954dfa-7063-4eb1-8a54-bd67b6035a75", 00:29:35.911 "aliases": [ 00:29:35.911 "lvs/nvme0n1p0" 00:29:35.911 ], 00:29:35.911 "product_name": "Logical Volume", 00:29:35.911 "block_size": 4096, 00:29:35.911 "num_blocks": 26476544, 00:29:35.911 "uuid": "73954dfa-7063-4eb1-8a54-bd67b6035a75", 00:29:35.911 "assigned_rate_limits": { 00:29:35.911 "rw_ios_per_sec": 0, 00:29:35.911 "rw_mbytes_per_sec": 0, 00:29:35.911 "r_mbytes_per_sec": 0, 00:29:35.911 "w_mbytes_per_sec": 0 00:29:35.911 }, 00:29:35.911 "claimed": false, 00:29:35.911 "zoned": false, 00:29:35.911 "supported_io_types": { 00:29:35.911 "read": true, 00:29:35.911 "write": true, 00:29:35.911 "unmap": true, 00:29:35.911 "flush": false, 00:29:35.911 "reset": true, 00:29:35.911 "nvme_admin": false, 00:29:35.911 "nvme_io": false, 00:29:35.911 "nvme_io_md": false, 00:29:35.911 "write_zeroes": true, 00:29:35.911 "zcopy": false, 00:29:35.911 "get_zone_info": false, 00:29:35.911 "zone_management": false, 00:29:35.911 "zone_append": false, 00:29:35.911 "compare": false, 00:29:35.911 "compare_and_write": false, 00:29:35.911 "abort": false, 00:29:35.911 "seek_hole": true, 00:29:35.911 "seek_data": true, 00:29:35.911 "copy": false, 00:29:35.911 "nvme_iov_md": false 00:29:35.911 }, 00:29:35.911 "driver_specific": { 00:29:35.911 "lvol": { 00:29:35.911 "lvol_store_uuid": "08ba8b23-f709-40ee-9f29-586ab37c03e3", 00:29:35.911 "base_bdev": "nvme0n1", 00:29:35.911 "thin_provision": true, 00:29:35.911 "num_allocated_clusters": 0, 00:29:35.911 "snapshot": false, 00:29:35.911 "clone": false, 00:29:35.911 "esnap_clone": false 00:29:35.911 } 00:29:35.911 } 00:29:35.911 } 00:29:35.911 ]' 00:29:35.911 11:43:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:35.911 11:43:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:35.911 11:43:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:36.170 11:43:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:36.170 11:43:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:36.170 11:43:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:29:36.170 11:43:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:29:36.170 11:43:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:29:36.170 11:43:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:29:36.170 11:43:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 73954dfa-7063-4eb1-8a54-bd67b6035a75 00:29:36.170 11:43:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=73954dfa-7063-4eb1-8a54-bd67b6035a75 00:29:36.170 11:43:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:36.170 11:43:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:36.170 11:43:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:36.170 11:43:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 73954dfa-7063-4eb1-8a54-bd67b6035a75 00:29:36.429 11:43:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:36.429 { 00:29:36.429 "name": "73954dfa-7063-4eb1-8a54-bd67b6035a75", 00:29:36.429 "aliases": [ 00:29:36.429 "lvs/nvme0n1p0" 00:29:36.429 ], 00:29:36.429 "product_name": "Logical Volume", 00:29:36.429 "block_size": 4096, 00:29:36.429 "num_blocks": 26476544, 00:29:36.429 "uuid": "73954dfa-7063-4eb1-8a54-bd67b6035a75", 00:29:36.429 "assigned_rate_limits": { 00:29:36.429 "rw_ios_per_sec": 0, 00:29:36.429 "rw_mbytes_per_sec": 0, 00:29:36.429 "r_mbytes_per_sec": 0, 00:29:36.429 "w_mbytes_per_sec": 0 00:29:36.429 }, 00:29:36.429 "claimed": false, 00:29:36.429 "zoned": false, 00:29:36.429 "supported_io_types": { 00:29:36.429 "read": true, 00:29:36.429 "write": true, 00:29:36.429 "unmap": true, 00:29:36.429 "flush": false, 00:29:36.429 "reset": true, 00:29:36.429 "nvme_admin": false, 00:29:36.429 "nvme_io": false, 00:29:36.429 "nvme_io_md": false, 00:29:36.429 "write_zeroes": true, 00:29:36.429 "zcopy": false, 00:29:36.429 "get_zone_info": false, 00:29:36.429 "zone_management": false, 00:29:36.429 "zone_append": false, 00:29:36.429 "compare": false, 00:29:36.429 "compare_and_write": false, 00:29:36.429 "abort": false, 00:29:36.429 "seek_hole": true, 00:29:36.429 "seek_data": true, 00:29:36.429 "copy": false, 00:29:36.429 "nvme_iov_md": false 00:29:36.429 }, 00:29:36.429 "driver_specific": { 00:29:36.429 "lvol": { 00:29:36.429 "lvol_store_uuid": "08ba8b23-f709-40ee-9f29-586ab37c03e3", 00:29:36.429 "base_bdev": "nvme0n1", 00:29:36.429 "thin_provision": true, 00:29:36.429 "num_allocated_clusters": 0, 00:29:36.429 "snapshot": false, 00:29:36.429 "clone": false, 00:29:36.429 "esnap_clone": false 00:29:36.429 } 00:29:36.429 } 00:29:36.429 } 00:29:36.429 ]' 00:29:36.429 11:43:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:36.429 11:43:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:36.429 11:43:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:36.429 11:43:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:36.429 11:43:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:36.429 11:43:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:29:36.429 11:43:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:29:36.429 11:43:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 73954dfa-7063-4eb1-8a54-bd67b6035a75 
--l2p_dram_limit 10' 00:29:36.429 11:43:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:29:36.429 11:43:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:29:36.429 11:43:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:29:36.429 11:43:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 73954dfa-7063-4eb1-8a54-bd67b6035a75 --l2p_dram_limit 10 -c nvc0n1p0 00:29:36.689 [2024-11-20 11:43:42.411191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.689 [2024-11-20 11:43:42.411253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:36.689 [2024-11-20 11:43:42.411290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:36.689 [2024-11-20 11:43:42.411312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.689 [2024-11-20 11:43:42.411386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.689 [2024-11-20 11:43:42.411399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:36.689 [2024-11-20 11:43:42.411413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:29:36.689 [2024-11-20 11:43:42.411423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.689 [2024-11-20 11:43:42.411454] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:36.689 [2024-11-20 11:43:42.412531] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:36.689 [2024-11-20 11:43:42.412701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.690 [2024-11-20 11:43:42.412719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:36.690 [2024-11-20 11:43:42.412734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.251 ms 00:29:36.690 [2024-11-20 11:43:42.412744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.690 [2024-11-20 11:43:42.412897] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID bf3c597d-5cce-48af-a3f6-c932c9b7fc69 00:29:36.690 [2024-11-20 11:43:42.414337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.690 [2024-11-20 11:43:42.414368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:29:36.690 [2024-11-20 11:43:42.414381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:29:36.690 [2024-11-20 11:43:42.414396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.690 [2024-11-20 11:43:42.421914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.690 [2024-11-20 11:43:42.421950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:36.690 [2024-11-20 11:43:42.421966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.472 ms 00:29:36.690 [2024-11-20 11:43:42.421979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.690 [2024-11-20 11:43:42.422083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.690 [2024-11-20 11:43:42.422100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:36.690 [2024-11-20 11:43:42.422112] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:29:36.690 [2024-11-20 11:43:42.422129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.690 [2024-11-20 11:43:42.422207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.690 [2024-11-20 11:43:42.422223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:36.690 [2024-11-20 11:43:42.422234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:29:36.690 [2024-11-20 11:43:42.422250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.690 [2024-11-20 11:43:42.422277] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:36.690 [2024-11-20 11:43:42.427809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.690 [2024-11-20 11:43:42.427844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:36.690 [2024-11-20 11:43:42.427858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.536 ms 00:29:36.690 [2024-11-20 11:43:42.427868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.690 [2024-11-20 11:43:42.427905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.690 [2024-11-20 11:43:42.427915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:36.690 [2024-11-20 11:43:42.427928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:36.690 [2024-11-20 11:43:42.427937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.690 [2024-11-20 11:43:42.427974] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:29:36.690 [2024-11-20 11:43:42.428095] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:36.690 [2024-11-20 11:43:42.428114] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:36.690 [2024-11-20 11:43:42.428127] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:36.690 [2024-11-20 11:43:42.428142] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:36.690 [2024-11-20 11:43:42.428154] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:36.690 [2024-11-20 11:43:42.428166] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:36.690 [2024-11-20 11:43:42.428176] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:36.690 [2024-11-20 11:43:42.428191] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:36.690 [2024-11-20 11:43:42.428200] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:36.690 [2024-11-20 11:43:42.428212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.690 [2024-11-20 11:43:42.428222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:36.690 [2024-11-20 11:43:42.428235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:29:36.690 [2024-11-20 11:43:42.428255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.690 [2024-11-20 11:43:42.428327] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.690 [2024-11-20 11:43:42.428337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:36.690 [2024-11-20 11:43:42.428349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:29:36.690 [2024-11-20 11:43:42.428359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.690 [2024-11-20 11:43:42.428452] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:36.690 [2024-11-20 11:43:42.428465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:36.690 [2024-11-20 11:43:42.428495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:36.690 [2024-11-20 11:43:42.428521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:36.690 [2024-11-20 11:43:42.428534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:36.690 [2024-11-20 11:43:42.428544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:36.690 [2024-11-20 11:43:42.428556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:36.690 [2024-11-20 11:43:42.428565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:36.690 [2024-11-20 11:43:42.428577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:36.690 [2024-11-20 11:43:42.428586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:36.690 [2024-11-20 11:43:42.428598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:36.690 [2024-11-20 11:43:42.428608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:36.690 [2024-11-20 11:43:42.428620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:36.690 [2024-11-20 11:43:42.428630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:36.690 [2024-11-20 11:43:42.428641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:36.690 [2024-11-20 11:43:42.428650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:36.690 [2024-11-20 11:43:42.428666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:36.690 [2024-11-20 11:43:42.428676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:36.690 [2024-11-20 11:43:42.428687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:36.690 [2024-11-20 11:43:42.428696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:36.690 [2024-11-20 11:43:42.428708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:36.690 [2024-11-20 11:43:42.428717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:36.690 [2024-11-20 11:43:42.428729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:36.690 [2024-11-20 11:43:42.428738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:36.690 [2024-11-20 11:43:42.428749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:36.690 [2024-11-20 11:43:42.428759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:36.690 [2024-11-20 11:43:42.428771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:36.690 [2024-11-20 11:43:42.428780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:36.690 [2024-11-20 11:43:42.428792] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:36.690 [2024-11-20 11:43:42.428801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:36.690 [2024-11-20 11:43:42.428813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:36.690 [2024-11-20 11:43:42.428822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:36.690 [2024-11-20 11:43:42.428836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:36.690 [2024-11-20 11:43:42.428845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:36.690 [2024-11-20 11:43:42.428856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:36.690 [2024-11-20 11:43:42.428866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:36.690 [2024-11-20 11:43:42.428877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:36.690 [2024-11-20 11:43:42.428886] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:36.690 [2024-11-20 11:43:42.428898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:36.690 [2024-11-20 11:43:42.428907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:36.690 [2024-11-20 11:43:42.428919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:36.690 [2024-11-20 11:43:42.428928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:36.690 [2024-11-20 11:43:42.428939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:36.690 [2024-11-20 11:43:42.428948] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:36.690 [2024-11-20 11:43:42.428962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:36.690 [2024-11-20 11:43:42.428972] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:36.691 [2024-11-20 11:43:42.428985] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:36.691 [2024-11-20 11:43:42.428996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:36.691 [2024-11-20 11:43:42.429011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:36.691 [2024-11-20 11:43:42.429020] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:36.691 [2024-11-20 11:43:42.429033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:36.691 [2024-11-20 11:43:42.429042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:36.691 [2024-11-20 11:43:42.429053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:36.691 [2024-11-20 11:43:42.429067] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:36.691 [2024-11-20 11:43:42.429091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:36.691 [2024-11-20 11:43:42.429122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:36.691 [2024-11-20 11:43:42.429136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:36.691 [2024-11-20 11:43:42.429146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:36.691 [2024-11-20 11:43:42.429161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:36.691 [2024-11-20 11:43:42.429172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:36.691 [2024-11-20 11:43:42.429185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:36.691 [2024-11-20 11:43:42.429195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:36.691 [2024-11-20 11:43:42.429208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:36.691 [2024-11-20 11:43:42.429219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:36.691 [2024-11-20 11:43:42.429234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:36.691 [2024-11-20 11:43:42.429245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:36.691 [2024-11-20 11:43:42.429260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:36.691 [2024-11-20 11:43:42.429270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:36.691 [2024-11-20 11:43:42.429283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:36.691 [2024-11-20 11:43:42.429294] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:36.691 [2024-11-20 11:43:42.429308] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:36.691 [2024-11-20 11:43:42.429319] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:36.691 [2024-11-20 11:43:42.429332] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:36.691 [2024-11-20 11:43:42.429343] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:36.691 [2024-11-20 11:43:42.429356] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:36.691 [2024-11-20 11:43:42.429367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:36.691 [2024-11-20 11:43:42.429381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:36.691 [2024-11-20 11:43:42.429391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.969 ms 00:29:36.691 [2024-11-20 11:43:42.429404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:36.691 [2024-11-20 11:43:42.429449] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:29:36.691 [2024-11-20 11:43:42.429467] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:29:39.979 [2024-11-20 11:43:45.072951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.979 [2024-11-20 11:43:45.073241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:29:39.979 [2024-11-20 11:43:45.073333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2643.486 ms 00:29:39.979 [2024-11-20 11:43:45.073375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.979 [2024-11-20 11:43:45.111718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.979 [2024-11-20 11:43:45.111962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:39.979 [2024-11-20 11:43:45.112062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.995 ms 00:29:39.979 [2024-11-20 11:43:45.112104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.979 [2024-11-20 11:43:45.112273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.979 [2024-11-20 11:43:45.112436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:39.979 [2024-11-20 11:43:45.112529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:29:39.979 [2024-11-20 11:43:45.112569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.979 [2024-11-20 11:43:45.157229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.979 [2024-11-20 11:43:45.157418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:39.979 [2024-11-20 11:43:45.157548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.569 ms 00:29:39.979 [2024-11-20 11:43:45.157592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.979 [2024-11-20 11:43:45.157655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.979 [2024-11-20 11:43:45.157698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:39.979 [2024-11-20 11:43:45.157789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:39.979 [2024-11-20 11:43:45.157829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.979 [2024-11-20 11:43:45.158329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.979 [2024-11-20 11:43:45.158447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:39.979 [2024-11-20 11:43:45.158544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:29:39.979 [2024-11-20 11:43:45.158584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.979 [2024-11-20 11:43:45.158711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.979 [2024-11-20 11:43:45.158770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:39.980 [2024-11-20 11:43:45.158847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:29:39.980 [2024-11-20 11:43:45.158882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.980 [2024-11-20 11:43:45.179555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.980 [2024-11-20 11:43:45.179726] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:39.980 [2024-11-20 11:43:45.179826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.631 ms 00:29:39.980 [2024-11-20 11:43:45.179866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.980 [2024-11-20 11:43:45.192099] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:39.980 [2024-11-20 11:43:45.195392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.980 [2024-11-20 11:43:45.195548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:39.980 [2024-11-20 11:43:45.195641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.409 ms 00:29:39.980 [2024-11-20 11:43:45.195678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.980 [2024-11-20 11:43:45.280153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.980 [2024-11-20 11:43:45.280411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:29:39.980 [2024-11-20 11:43:45.280549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.416 ms 00:29:39.980 [2024-11-20 11:43:45.280589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.980 [2024-11-20 11:43:45.280805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.980 [2024-11-20 11:43:45.280852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:39.980 [2024-11-20 11:43:45.280871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:29:39.980 [2024-11-20 11:43:45.280881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.980 [2024-11-20 11:43:45.316949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.980 [2024-11-20 11:43:45.317071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:29:39.980 [2024-11-20 11:43:45.317136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.005 ms 00:29:39.980 [2024-11-20 11:43:45.317148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.980 [2024-11-20 11:43:45.352988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.980 [2024-11-20 11:43:45.353023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:29:39.980 [2024-11-20 11:43:45.353041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.792 ms 00:29:39.980 [2024-11-20 11:43:45.353052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.980 [2024-11-20 11:43:45.353778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.980 [2024-11-20 11:43:45.353797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:39.980 [2024-11-20 11:43:45.353811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.680 ms 00:29:39.980 [2024-11-20 11:43:45.353822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.980 [2024-11-20 11:43:45.451011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.980 [2024-11-20 11:43:45.451066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:29:39.980 [2024-11-20 11:43:45.451090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 97.123 ms 00:29:39.980 [2024-11-20 11:43:45.451101] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.980 [2024-11-20 11:43:45.488364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.980 [2024-11-20 11:43:45.488529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:29:39.980 [2024-11-20 11:43:45.488573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.160 ms 00:29:39.980 [2024-11-20 11:43:45.488584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.980 [2024-11-20 11:43:45.524675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.980 [2024-11-20 11:43:45.524711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:29:39.980 [2024-11-20 11:43:45.524727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.020 ms 00:29:39.980 [2024-11-20 11:43:45.524737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.980 [2024-11-20 11:43:45.560310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.980 [2024-11-20 11:43:45.560346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:39.980 [2024-11-20 11:43:45.560362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.529 ms 00:29:39.980 [2024-11-20 11:43:45.560372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.980 [2024-11-20 11:43:45.560418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.980 [2024-11-20 11:43:45.560429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:39.980 [2024-11-20 11:43:45.560445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:39.980 [2024-11-20 11:43:45.560455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.980 [2024-11-20 11:43:45.560583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:39.980 [2024-11-20 11:43:45.560597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:39.980 [2024-11-20 11:43:45.560613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:29:39.980 [2024-11-20 11:43:45.560623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:39.980 [2024-11-20 11:43:45.561749] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3150.033 ms, result 0 00:29:39.980 { 00:29:39.980 "name": "ftl0", 00:29:39.980 "uuid": "bf3c597d-5cce-48af-a3f6-c932c9b7fc69" 00:29:39.980 } 00:29:39.980 11:43:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:29:39.980 11:43:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:29:40.240 11:43:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:29:40.240 11:43:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:29:40.240 11:43:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:29:40.501 /dev/nbd0 00:29:40.501 11:43:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:29:40.501 11:43:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:29:40.501 11:43:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:29:40.501 11:43:46 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:40.501 11:43:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:40.501 11:43:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:29:40.501 11:43:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:29:40.501 11:43:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:40.501 11:43:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:40.501 11:43:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:29:40.501 1+0 records in 00:29:40.501 1+0 records out 00:29:40.501 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000349204 s, 11.7 MB/s 00:29:40.501 11:43:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:29:40.501 11:43:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:29:40.501 11:43:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:29:40.501 11:43:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:40.501 11:43:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:29:40.501 11:43:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:29:40.501 [2024-11-20 11:43:46.213685] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:29:40.501 [2024-11-20 11:43:46.213984] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81752 ] 00:29:40.759 [2024-11-20 11:43:46.385208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:40.759 [2024-11-20 11:43:46.501221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:42.133  [2024-11-20T11:43:48.833Z] Copying: 200/1024 [MB] (200 MBps) [2024-11-20T11:43:50.210Z] Copying: 400/1024 [MB] (200 MBps) [2024-11-20T11:43:51.147Z] Copying: 601/1024 [MB] (201 MBps) [2024-11-20T11:43:52.084Z] Copying: 800/1024 [MB] (199 MBps) [2024-11-20T11:43:52.084Z] Copying: 988/1024 [MB] (187 MBps) [2024-11-20T11:43:53.463Z] Copying: 1024/1024 [MB] (average 197 MBps) 00:29:47.701 00:29:47.701 11:43:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:49.607 11:43:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:29:49.607 [2024-11-20 11:43:55.073898] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
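The two spdk_dd invocations above and below move 262144 blocks of 4096 bytes each, i.e. exactly 1 GiB, which is why the progress stream that follows tops out at 1024/1024 [MB]: random data is first staged in a plain file, checksummed with md5sum as the reference digest, and then replayed through /dev/nbd0 (backed by ftl0) with O_DIRECT. A minimal standalone sketch of that data path, assuming a running SPDK application in which ftl0 already exists and the repository layout quoted in the log:

  # Sketch only: replays the dirty_shutdown.sh@70..@78 data path by hand.
  # Assumes an SPDK app is already running and bdev "ftl0" has been created.
  set -euo pipefail
  SPDK=/home/vagrant/spdk_repo/spdk

  modprobe nbd
  "$SPDK/scripts/rpc.py" nbd_start_disk ftl0 /dev/nbd0

  # 262144 blocks x 4096 B = 1 GiB, matching "Copying: 1024/1024 [MB]".
  "$SPDK/build/bin/spdk_dd" -m 0x2 --if=/dev/urandom \
      --of="$SPDK/test/ftl/testfile" --bs=4096 --count=262144
  md5sum "$SPDK/test/ftl/testfile"   # reference digest for later comparison

  # Push the same bytes through the nbd device, bypassing the page cache.
  "$SPDK/build/bin/spdk_dd" -m 0x2 --if="$SPDK/test/ftl/testfile" \
      --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct
  sync /dev/nbd0
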
00:29:49.607 [2024-11-20 11:43:55.074070] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81844 ] 00:29:49.607 [2024-11-20 11:43:55.274616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:49.866 [2024-11-20 11:43:55.416535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:51.243  [2024-11-20T11:43:57.942Z] Copying: 19/1024 [MB] (19 MBps) [2024-11-20T11:43:58.878Z] Copying: 36/1024 [MB] (16 MBps) [2024-11-20T11:43:59.879Z] Copying: 53/1024 [MB] (17 MBps) [2024-11-20T11:44:00.824Z] Copying: 71/1024 [MB] (17 MBps) [2024-11-20T11:44:01.760Z] Copying: 90/1024 [MB] (18 MBps) [2024-11-20T11:44:03.138Z] Copying: 108/1024 [MB] (18 MBps) [2024-11-20T11:44:04.075Z] Copying: 126/1024 [MB] (18 MBps) [2024-11-20T11:44:05.012Z] Copying: 145/1024 [MB] (18 MBps) [2024-11-20T11:44:05.948Z] Copying: 164/1024 [MB] (19 MBps) [2024-11-20T11:44:06.885Z] Copying: 183/1024 [MB] (19 MBps) [2024-11-20T11:44:07.825Z] Copying: 202/1024 [MB] (19 MBps) [2024-11-20T11:44:08.761Z] Copying: 221/1024 [MB] (18 MBps) [2024-11-20T11:44:10.139Z] Copying: 240/1024 [MB] (18 MBps) [2024-11-20T11:44:11.075Z] Copying: 258/1024 [MB] (18 MBps) [2024-11-20T11:44:12.011Z] Copying: 277/1024 [MB] (19 MBps) [2024-11-20T11:44:12.948Z] Copying: 296/1024 [MB] (18 MBps) [2024-11-20T11:44:13.883Z] Copying: 315/1024 [MB] (18 MBps) [2024-11-20T11:44:14.818Z] Copying: 334/1024 [MB] (18 MBps) [2024-11-20T11:44:15.754Z] Copying: 353/1024 [MB] (18 MBps) [2024-11-20T11:44:17.131Z] Copying: 372/1024 [MB] (18 MBps) [2024-11-20T11:44:18.082Z] Copying: 391/1024 [MB] (18 MBps) [2024-11-20T11:44:19.033Z] Copying: 410/1024 [MB] (18 MBps) [2024-11-20T11:44:19.970Z] Copying: 428/1024 [MB] (18 MBps) [2024-11-20T11:44:20.905Z] Copying: 448/1024 [MB] (19 MBps) [2024-11-20T11:44:21.842Z] Copying: 466/1024 [MB] (18 MBps) [2024-11-20T11:44:22.780Z] Copying: 484/1024 [MB] (18 MBps) [2024-11-20T11:44:24.158Z] Copying: 503/1024 [MB] (18 MBps) [2024-11-20T11:44:25.097Z] Copying: 522/1024 [MB] (18 MBps) [2024-11-20T11:44:26.034Z] Copying: 540/1024 [MB] (18 MBps) [2024-11-20T11:44:26.970Z] Copying: 558/1024 [MB] (18 MBps) [2024-11-20T11:44:27.906Z] Copying: 577/1024 [MB] (18 MBps) [2024-11-20T11:44:28.843Z] Copying: 595/1024 [MB] (18 MBps) [2024-11-20T11:44:29.778Z] Copying: 614/1024 [MB] (18 MBps) [2024-11-20T11:44:31.161Z] Copying: 632/1024 [MB] (18 MBps) [2024-11-20T11:44:32.097Z] Copying: 651/1024 [MB] (18 MBps) [2024-11-20T11:44:33.032Z] Copying: 670/1024 [MB] (18 MBps) [2024-11-20T11:44:33.968Z] Copying: 688/1024 [MB] (18 MBps) [2024-11-20T11:44:34.904Z] Copying: 706/1024 [MB] (18 MBps) [2024-11-20T11:44:35.842Z] Copying: 725/1024 [MB] (18 MBps) [2024-11-20T11:44:36.778Z] Copying: 743/1024 [MB] (18 MBps) [2024-11-20T11:44:38.156Z] Copying: 762/1024 [MB] (18 MBps) [2024-11-20T11:44:39.093Z] Copying: 781/1024 [MB] (18 MBps) [2024-11-20T11:44:40.027Z] Copying: 800/1024 [MB] (18 MBps) [2024-11-20T11:44:40.964Z] Copying: 818/1024 [MB] (18 MBps) [2024-11-20T11:44:41.961Z] Copying: 836/1024 [MB] (18 MBps) [2024-11-20T11:44:42.897Z] Copying: 854/1024 [MB] (18 MBps) [2024-11-20T11:44:43.833Z] Copying: 873/1024 [MB] (18 MBps) [2024-11-20T11:44:44.767Z] Copying: 891/1024 [MB] (18 MBps) [2024-11-20T11:44:46.142Z] Copying: 909/1024 [MB] (18 MBps) [2024-11-20T11:44:47.079Z] Copying: 928/1024 [MB] (18 MBps) 
[2024-11-20T11:44:48.014Z] Copying: 946/1024 [MB] (18 MBps) [2024-11-20T11:44:48.982Z] Copying: 964/1024 [MB] (18 MBps) [2024-11-20T11:44:49.920Z] Copying: 982/1024 [MB] (17 MBps) [2024-11-20T11:44:50.858Z] Copying: 1000/1024 [MB] (18 MBps) [2024-11-20T11:44:51.117Z] Copying: 1018/1024 [MB] (18 MBps) [2024-11-20T11:44:52.492Z] Copying: 1024/1024 [MB] (average 18 MBps) 00:30:46.730 00:30:46.730 11:44:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:30:46.730 11:44:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:30:46.988 11:44:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:30:46.988 [2024-11-20 11:44:52.667983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:46.988 [2024-11-20 11:44:52.668046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:46.988 [2024-11-20 11:44:52.668065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:46.988 [2024-11-20 11:44:52.668078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.988 [2024-11-20 11:44:52.668106] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:46.988 [2024-11-20 11:44:52.672314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:46.988 [2024-11-20 11:44:52.672347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:46.988 [2024-11-20 11:44:52.672366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.183 ms 00:30:46.988 [2024-11-20 11:44:52.672376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.988 [2024-11-20 11:44:52.674465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:46.988 [2024-11-20 11:44:52.674514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:46.988 [2024-11-20 11:44:52.674531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.054 ms 00:30:46.988 [2024-11-20 11:44:52.674542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.988 [2024-11-20 11:44:52.690114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:46.988 [2024-11-20 11:44:52.690292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:46.988 [2024-11-20 11:44:52.690320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.546 ms 00:30:46.988 [2024-11-20 11:44:52.690331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.988 [2024-11-20 11:44:52.695420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:46.988 [2024-11-20 11:44:52.695452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:46.988 [2024-11-20 11:44:52.695467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.044 ms 00:30:46.988 [2024-11-20 11:44:52.695488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:46.988 [2024-11-20 11:44:52.731220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:46.988 [2024-11-20 11:44:52.731256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:46.988 [2024-11-20 11:44:52.731272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.652 ms 00:30:46.988 [2024-11-20 11:44:52.731298] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.248 [2024-11-20 11:44:52.753897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.248 [2024-11-20 11:44:52.753936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:47.248 [2024-11-20 11:44:52.753969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.550 ms 00:30:47.248 [2024-11-20 11:44:52.753983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.248 [2024-11-20 11:44:52.754135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.248 [2024-11-20 11:44:52.754149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:47.248 [2024-11-20 11:44:52.754164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:30:47.248 [2024-11-20 11:44:52.754174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.248 [2024-11-20 11:44:52.790378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.248 [2024-11-20 11:44:52.790413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:47.248 [2024-11-20 11:44:52.790429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.171 ms 00:30:47.248 [2024-11-20 11:44:52.790439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.248 [2024-11-20 11:44:52.826547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.248 [2024-11-20 11:44:52.826581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:47.248 [2024-11-20 11:44:52.826597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.055 ms 00:30:47.248 [2024-11-20 11:44:52.826606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.248 [2024-11-20 11:44:52.860896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.248 [2024-11-20 11:44:52.861051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:47.248 [2024-11-20 11:44:52.861101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.244 ms 00:30:47.248 [2024-11-20 11:44:52.861111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.248 [2024-11-20 11:44:52.897944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.248 [2024-11-20 11:44:52.897982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:47.248 [2024-11-20 11:44:52.897999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.731 ms 00:30:47.248 [2024-11-20 11:44:52.898009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.248 [2024-11-20 11:44:52.898053] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:47.248 [2024-11-20 11:44:52.898071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:47.248 [2024-11-20 11:44:52.898087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:47.248 [2024-11-20 11:44:52.898098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:47.248 [2024-11-20 11:44:52.898111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:47.248 [2024-11-20 11:44:52.898123] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:47.248 [2024-11-20 11:44:52.898136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:47.248 [2024-11-20 11:44:52.898147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:47.248 [2024-11-20 11:44:52.898163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:47.248 [2024-11-20 11:44:52.898174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:47.248 [2024-11-20 11:44:52.898188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:47.248 [2024-11-20 11:44:52.898198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:47.248 [2024-11-20 11:44:52.898211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:47.248 [2024-11-20 11:44:52.898222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:47.248 [2024-11-20 11:44:52.898238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:47.248 [2024-11-20 11:44:52.898249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:47.248 [2024-11-20 11:44:52.898262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:47.248 [2024-11-20 11:44:52.898273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:47.248 [2024-11-20 11:44:52.898286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:47.248 [2024-11-20 11:44:52.898297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:47.248 [2024-11-20 11:44:52.898310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:47.248 [2024-11-20 11:44:52.898321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:47.248 [2024-11-20 11:44:52.898336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:47.248 [2024-11-20 11:44:52.898347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:47.248 [2024-11-20 11:44:52.898362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:47.248 [2024-11-20 11:44:52.898373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:47.248 [2024-11-20 11:44:52.898387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:47.248 [2024-11-20 11:44:52.898398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:47.248 [2024-11-20 11:44:52.898411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:47.248 [2024-11-20 11:44:52.898422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:47.248 [2024-11-20 11:44:52.898435] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:47.248 [2024-11-20 11:44:52.898446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:47.248 [2024-11-20 11:44:52.898460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:47.248 [2024-11-20 11:44:52.898486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:47.248 [2024-11-20 11:44:52.898500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:47.248 [2024-11-20 11:44:52.898511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:47.248 [2024-11-20 11:44:52.898524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:47.248 [2024-11-20 11:44:52.898535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:47.248 [2024-11-20 11:44:52.898548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:47.248 [2024-11-20 11:44:52.898559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.898594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.898605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.898618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.898629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.898642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.898653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.898666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.898677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.898692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.898713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.898727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.898742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.898756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.898767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.898780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 
11:44:52.898790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.898806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.898816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.898829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.898840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.898853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.898863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.898876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.898887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.898900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.898911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.898924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.898935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.898948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.898959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.898972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.898982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.898999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.899010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.899023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.899033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.899046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.899056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.899069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.899080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 
00:30:47.249 [2024-11-20 11:44:52.899093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.899103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.899116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.899126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.899139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.899149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.899162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.899173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.899188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.899199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.899212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.899222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.899235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.899246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.899260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.899272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.899285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.899296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.899309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.899320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.899337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:47.249 [2024-11-20 11:44:52.899355] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:47.249 [2024-11-20 11:44:52.899367] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bf3c597d-5cce-48af-a3f6-c932c9b7fc69 00:30:47.249 [2024-11-20 11:44:52.899379] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:47.249 [2024-11-20 11:44:52.899393] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:47.249 [2024-11-20 11:44:52.899403] ftl_debug.c: 
215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:47.249 [2024-11-20 11:44:52.899419] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:47.249 [2024-11-20 11:44:52.899428] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:47.249 [2024-11-20 11:44:52.899441] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:47.249 [2024-11-20 11:44:52.899451] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:47.249 [2024-11-20 11:44:52.899463] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:47.249 [2024-11-20 11:44:52.899481] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:47.249 [2024-11-20 11:44:52.899494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.249 [2024-11-20 11:44:52.899504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:47.249 [2024-11-20 11:44:52.899517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.443 ms 00:30:47.249 [2024-11-20 11:44:52.899534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.249 [2024-11-20 11:44:52.920880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.249 [2024-11-20 11:44:52.920912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:47.249 [2024-11-20 11:44:52.920932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.286 ms 00:30:47.249 [2024-11-20 11:44:52.920942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.249 [2024-11-20 11:44:52.921532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.249 [2024-11-20 11:44:52.921594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:47.249 [2024-11-20 11:44:52.921612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.559 ms 00:30:47.249 [2024-11-20 11:44:52.921623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.249 [2024-11-20 11:44:52.989959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:47.249 [2024-11-20 11:44:52.990007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:47.250 [2024-11-20 11:44:52.990024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:47.250 [2024-11-20 11:44:52.990035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.250 [2024-11-20 11:44:52.990108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:47.250 [2024-11-20 11:44:52.990120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:47.250 [2024-11-20 11:44:52.990132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:47.250 [2024-11-20 11:44:52.990142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.250 [2024-11-20 11:44:52.990265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:47.250 [2024-11-20 11:44:52.990280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:47.250 [2024-11-20 11:44:52.990300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:47.250 [2024-11-20 11:44:52.990311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.250 [2024-11-20 11:44:52.990337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:47.250 
[2024-11-20 11:44:52.990348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:47.250 [2024-11-20 11:44:52.990360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:47.250 [2024-11-20 11:44:52.990370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.509 [2024-11-20 11:44:53.116823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:47.509 [2024-11-20 11:44:53.116887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:47.509 [2024-11-20 11:44:53.116921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:47.509 [2024-11-20 11:44:53.116932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.509 [2024-11-20 11:44:53.217206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:47.509 [2024-11-20 11:44:53.217268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:47.509 [2024-11-20 11:44:53.217287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:47.509 [2024-11-20 11:44:53.217298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.509 [2024-11-20 11:44:53.217425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:47.509 [2024-11-20 11:44:53.217439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:47.509 [2024-11-20 11:44:53.217453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:47.509 [2024-11-20 11:44:53.217467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.509 [2024-11-20 11:44:53.217556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:47.509 [2024-11-20 11:44:53.217568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:47.509 [2024-11-20 11:44:53.217582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:47.509 [2024-11-20 11:44:53.217592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.509 [2024-11-20 11:44:53.217723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:47.509 [2024-11-20 11:44:53.217737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:47.509 [2024-11-20 11:44:53.217756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:47.509 [2024-11-20 11:44:53.217766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.509 [2024-11-20 11:44:53.217812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:47.509 [2024-11-20 11:44:53.217824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:47.509 [2024-11-20 11:44:53.217837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:47.509 [2024-11-20 11:44:53.217847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.509 [2024-11-20 11:44:53.217890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:47.509 [2024-11-20 11:44:53.217901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:47.509 [2024-11-20 11:44:53.217914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:47.509 [2024-11-20 11:44:53.217925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.509 [2024-11-20 11:44:53.217980] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:47.509 [2024-11-20 11:44:53.217992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:47.509 [2024-11-20 11:44:53.218005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:47.509 [2024-11-20 11:44:53.218015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.509 [2024-11-20 11:44:53.218151] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 550.125 ms, result 0 00:30:47.509 true 00:30:47.509 11:44:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 81604 00:30:47.509 11:44:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid81604 00:30:47.509 11:44:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:30:47.768 [2024-11-20 11:44:53.333726] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:30:47.768 [2024-11-20 11:44:53.334264] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82425 ] 00:30:47.768 [2024-11-20 11:44:53.504202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:48.026 [2024-11-20 11:44:53.615586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:49.401  [2024-11-20T11:44:56.099Z] Copying: 197/1024 [MB] (197 MBps) [2024-11-20T11:44:57.036Z] Copying: 398/1024 [MB] (200 MBps) [2024-11-20T11:44:57.971Z] Copying: 598/1024 [MB] (200 MBps) [2024-11-20T11:44:59.352Z] Copying: 798/1024 [MB] (199 MBps) [2024-11-20T11:44:59.352Z] Copying: 998/1024 [MB] (200 MBps) [2024-11-20T11:45:00.284Z] Copying: 1024/1024 [MB] (average 199 MBps) 00:30:54.522 00:30:54.780 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 81604 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:30:54.780 11:45:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:54.780 [2024-11-20 11:45:00.409272] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
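The two spdk_dd invocations here are sized in 4096-byte blocks: --bs=4096 with --count=262144 moves 262144 * 4096 B = 1 GiB, which is the 1024/1024 [MB] shown in the urandom fill's progress (at the reported average of 199 MBps, roughly five seconds of I/O). The second invocation writes that same testfile2 back into ftl0 starting at block offset --seek=262144, and because the target (pid 81604) was killed with SIGKILL at line 83 of dirty_shutdown.sh, its startup below has to run the dirty-recovery path (blobstore recovery, Restore NV cache metadata, Restore P2L checkpoints, and so on). The shutdown statistics dump just above also reports WAF: inf simply because write amplification is total writes over user writes and user writes is still 0; the matching dump after the fill completes reports 64960 / 64000 = 1.0150. A minimal sketch of both calculations, using only counters printed in this log (transfer_bytes and waf are illustrative helpers, not SPDK APIs):

#include <inttypes.h>
#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* Bytes moved by one spdk_dd invocation: --bs times --count. */
static uint64_t transfer_bytes(uint64_t bs, uint64_t count)
{
	return bs * count;
}

/* Write amplification factor as dumped by ftl_debug.c: total writes
 * (user data plus FTL-internal writes) over user writes. With zero
 * user writes the quotient is undefined and the log prints "inf". */
static double waf(uint64_t total_writes, uint64_t user_writes)
{
	if (user_writes == 0)
		return INFINITY;
	return (double)total_writes / (double)user_writes;
}

int main(void)
{
	/* --bs=4096 --count=262144  ->  1024 MiB, the 1024/1024 [MB] above */
	printf("%" PRIu64 " MiB\n", transfer_bytes(4096, 262144) >> 20);
	/* shutdown dump above: total writes 960, user writes 0 -> inf */
	printf("WAF: %g\n", waf(960, 0));
	/* dump after the fill below: 64960 / 64000 -> 1.0150 */
	printf("WAF: %.4f\n", waf(64960, 64000));
	return 0;
}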
00:30:54.780 [2024-11-20 11:45:00.410213] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82499 ] 00:30:55.038 [2024-11-20 11:45:00.604045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:55.038 [2024-11-20 11:45:00.747836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:55.604 [2024-11-20 11:45:01.145814] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:55.604 [2024-11-20 11:45:01.145883] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:55.604 [2024-11-20 11:45:01.213495] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:30:55.604 [2024-11-20 11:45:01.213950] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:30:55.604 [2024-11-20 11:45:01.214285] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:30:55.863 [2024-11-20 11:45:01.486177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.863 [2024-11-20 11:45:01.486244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:55.863 [2024-11-20 11:45:01.486261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:55.863 [2024-11-20 11:45:01.486272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.863 [2024-11-20 11:45:01.486333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.863 [2024-11-20 11:45:01.486345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:55.863 [2024-11-20 11:45:01.486357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:30:55.863 [2024-11-20 11:45:01.486367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.863 [2024-11-20 11:45:01.486389] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:55.863 [2024-11-20 11:45:01.487437] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:55.863 [2024-11-20 11:45:01.487465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.863 [2024-11-20 11:45:01.487490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:55.863 [2024-11-20 11:45:01.487501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.080 ms 00:30:55.863 [2024-11-20 11:45:01.487512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.863 [2024-11-20 11:45:01.490031] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:55.863 [2024-11-20 11:45:01.509513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.863 [2024-11-20 11:45:01.509555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:55.863 [2024-11-20 11:45:01.509571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.482 ms 00:30:55.863 [2024-11-20 11:45:01.509583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.863 [2024-11-20 11:45:01.509651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.863 [2024-11-20 11:45:01.509665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:30:55.863 [2024-11-20 11:45:01.509677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:30:55.863 [2024-11-20 11:45:01.509687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.863 [2024-11-20 11:45:01.522596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.863 [2024-11-20 11:45:01.522625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:55.863 [2024-11-20 11:45:01.522638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.834 ms 00:30:55.863 [2024-11-20 11:45:01.522649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.863 [2024-11-20 11:45:01.522740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.864 [2024-11-20 11:45:01.522758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:55.864 [2024-11-20 11:45:01.522770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:30:55.864 [2024-11-20 11:45:01.522780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.864 [2024-11-20 11:45:01.522839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.864 [2024-11-20 11:45:01.522856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:55.864 [2024-11-20 11:45:01.522867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:30:55.864 [2024-11-20 11:45:01.522878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.864 [2024-11-20 11:45:01.522907] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:55.864 [2024-11-20 11:45:01.528611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.864 [2024-11-20 11:45:01.528841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:55.864 [2024-11-20 11:45:01.528864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.714 ms 00:30:55.864 [2024-11-20 11:45:01.528875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.864 [2024-11-20 11:45:01.528912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.864 [2024-11-20 11:45:01.528924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:55.864 [2024-11-20 11:45:01.528935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:30:55.864 [2024-11-20 11:45:01.528946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.864 [2024-11-20 11:45:01.528985] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:55.864 [2024-11-20 11:45:01.529019] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:55.864 [2024-11-20 11:45:01.529057] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:55.864 [2024-11-20 11:45:01.529085] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:55.864 [2024-11-20 11:45:01.529181] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:55.864 [2024-11-20 11:45:01.529196] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:55.864 
[2024-11-20 11:45:01.529211] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:55.864 [2024-11-20 11:45:01.529225] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:55.864 [2024-11-20 11:45:01.529242] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:55.864 [2024-11-20 11:45:01.529254] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:55.864 [2024-11-20 11:45:01.529265] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:55.864 [2024-11-20 11:45:01.529276] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:55.864 [2024-11-20 11:45:01.529288] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:55.864 [2024-11-20 11:45:01.529299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.864 [2024-11-20 11:45:01.529310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:55.864 [2024-11-20 11:45:01.529321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:30:55.864 [2024-11-20 11:45:01.529332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.864 [2024-11-20 11:45:01.529405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.864 [2024-11-20 11:45:01.529421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:55.864 [2024-11-20 11:45:01.529432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:30:55.864 [2024-11-20 11:45:01.529443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.864 [2024-11-20 11:45:01.529560] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:55.864 [2024-11-20 11:45:01.529577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:55.864 [2024-11-20 11:45:01.529590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:55.864 [2024-11-20 11:45:01.529601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:55.864 [2024-11-20 11:45:01.529612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:55.864 [2024-11-20 11:45:01.529623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:55.864 [2024-11-20 11:45:01.529633] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:55.864 [2024-11-20 11:45:01.529643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:55.864 [2024-11-20 11:45:01.529654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:55.864 [2024-11-20 11:45:01.529664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:55.864 [2024-11-20 11:45:01.529676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:55.864 [2024-11-20 11:45:01.529697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:55.864 [2024-11-20 11:45:01.529707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:55.864 [2024-11-20 11:45:01.529717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:55.864 [2024-11-20 11:45:01.529728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:55.864 [2024-11-20 11:45:01.529738] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:55.864 [2024-11-20 11:45:01.529750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:55.864 [2024-11-20 11:45:01.529760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:55.864 [2024-11-20 11:45:01.529770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:55.864 [2024-11-20 11:45:01.529780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:55.864 [2024-11-20 11:45:01.529789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:55.864 [2024-11-20 11:45:01.529798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:55.864 [2024-11-20 11:45:01.529807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:55.864 [2024-11-20 11:45:01.529816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:55.864 [2024-11-20 11:45:01.529825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:55.864 [2024-11-20 11:45:01.529835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:55.864 [2024-11-20 11:45:01.529844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:55.864 [2024-11-20 11:45:01.529853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:55.864 [2024-11-20 11:45:01.529862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:55.864 [2024-11-20 11:45:01.529872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:55.864 [2024-11-20 11:45:01.529881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:55.864 [2024-11-20 11:45:01.529890] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:55.864 [2024-11-20 11:45:01.529899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:55.864 [2024-11-20 11:45:01.529909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:55.864 [2024-11-20 11:45:01.529918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:55.864 [2024-11-20 11:45:01.529927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:55.864 [2024-11-20 11:45:01.529937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:55.864 [2024-11-20 11:45:01.529946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:55.864 [2024-11-20 11:45:01.529955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:55.865 [2024-11-20 11:45:01.529964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:55.865 [2024-11-20 11:45:01.529972] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:55.865 [2024-11-20 11:45:01.529981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:55.865 [2024-11-20 11:45:01.529992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:55.865 [2024-11-20 11:45:01.530001] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:55.865 [2024-11-20 11:45:01.530011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:55.865 [2024-11-20 11:45:01.530022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:55.865 [2024-11-20 11:45:01.530036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:55.865 [2024-11-20 
11:45:01.530046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:55.865 [2024-11-20 11:45:01.530056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:55.865 [2024-11-20 11:45:01.530065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:55.865 [2024-11-20 11:45:01.530074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:55.865 [2024-11-20 11:45:01.530084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:55.865 [2024-11-20 11:45:01.530093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:55.865 [2024-11-20 11:45:01.530115] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:55.865 [2024-11-20 11:45:01.530127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:55.865 [2024-11-20 11:45:01.530139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:55.865 [2024-11-20 11:45:01.530149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:55.865 [2024-11-20 11:45:01.530160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:55.865 [2024-11-20 11:45:01.530170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:55.865 [2024-11-20 11:45:01.530180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:55.865 [2024-11-20 11:45:01.530190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:55.865 [2024-11-20 11:45:01.530201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:55.865 [2024-11-20 11:45:01.530211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:55.865 [2024-11-20 11:45:01.530222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:55.865 [2024-11-20 11:45:01.530232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:55.865 [2024-11-20 11:45:01.530243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:55.865 [2024-11-20 11:45:01.530253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:55.865 [2024-11-20 11:45:01.530262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:55.865 [2024-11-20 11:45:01.530272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:55.865 [2024-11-20 11:45:01.530282] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:30:55.865 [2024-11-20 11:45:01.530293] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:55.865 [2024-11-20 11:45:01.530304] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:55.865 [2024-11-20 11:45:01.530314] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:55.865 [2024-11-20 11:45:01.530323] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:55.865 [2024-11-20 11:45:01.530334] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:55.865 [2024-11-20 11:45:01.530344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.865 [2024-11-20 11:45:01.530354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:55.865 [2024-11-20 11:45:01.530365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.839 ms 00:30:55.865 [2024-11-20 11:45:01.530375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.865 [2024-11-20 11:45:01.579721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.865 [2024-11-20 11:45:01.579983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:55.865 [2024-11-20 11:45:01.580009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.293 ms 00:30:55.865 [2024-11-20 11:45:01.580022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:55.865 [2024-11-20 11:45:01.580138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:55.865 [2024-11-20 11:45:01.580157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:55.865 [2024-11-20 11:45:01.580170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:30:55.865 [2024-11-20 11:45:01.580181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.125 [2024-11-20 11:45:01.645917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.125 [2024-11-20 11:45:01.645961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:56.125 [2024-11-20 11:45:01.645976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.621 ms 00:30:56.125 [2024-11-20 11:45:01.645992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.125 [2024-11-20 11:45:01.646055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.125 [2024-11-20 11:45:01.646067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:56.125 [2024-11-20 11:45:01.646080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:56.125 [2024-11-20 11:45:01.646090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.125 [2024-11-20 11:45:01.647133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.125 [2024-11-20 11:45:01.647233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:56.125 [2024-11-20 11:45:01.647311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.967 ms 00:30:56.125 [2024-11-20 11:45:01.647351] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.125 [2024-11-20 11:45:01.647547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.125 [2024-11-20 11:45:01.647726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:56.125 [2024-11-20 11:45:01.647766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:30:56.125 [2024-11-20 11:45:01.647798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.125 [2024-11-20 11:45:01.671894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.125 [2024-11-20 11:45:01.672047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:56.125 [2024-11-20 11:45:01.672069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.041 ms 00:30:56.125 [2024-11-20 11:45:01.672083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.125 [2024-11-20 11:45:01.692860] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:56.125 [2024-11-20 11:45:01.692899] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:56.125 [2024-11-20 11:45:01.692915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.125 [2024-11-20 11:45:01.692926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:56.125 [2024-11-20 11:45:01.692938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.683 ms 00:30:56.125 [2024-11-20 11:45:01.692948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.125 [2024-11-20 11:45:01.723162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.125 [2024-11-20 11:45:01.723218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:56.125 [2024-11-20 11:45:01.723245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.170 ms 00:30:56.125 [2024-11-20 11:45:01.723258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.125 [2024-11-20 11:45:01.740992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.125 [2024-11-20 11:45:01.741175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:56.125 [2024-11-20 11:45:01.741197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.691 ms 00:30:56.125 [2024-11-20 11:45:01.741208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.125 [2024-11-20 11:45:01.758853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.125 [2024-11-20 11:45:01.758888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:56.125 [2024-11-20 11:45:01.758901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.586 ms 00:30:56.125 [2024-11-20 11:45:01.758911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.125 [2024-11-20 11:45:01.759723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.125 [2024-11-20 11:45:01.759748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:56.125 [2024-11-20 11:45:01.759761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.706 ms 00:30:56.125 [2024-11-20 11:45:01.759772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
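A note on reading the startup records above. In the ftl_layout/ftl_sb_v5 dumps, blk_offs and blk_sz are counted in 4 KiB FTL blocks, so the figures cross-check: Region l2p has blk_sz:0x5000 = 20480 blocks = 80.00 MiB, exactly the 20971520 L2P entries times the 4-byte address size that ftl_layout.c reports, and Region sb's blk_sz:0x20 = 32 blocks is the 0.12 MiB shown. Separately, every management step is traced by mngt/ftl_mngt.c as a fixed four-record group (Action, name:, duration: ... ms, status: ...), so per-step timings can be recovered from a captured log with a few lines of parsing. A minimal standalone sketch, not part of SPDK, assuming one record per line as on the original console (the archived log here wraps many records per physical line):

#include <stdio.h>
#include <string.h>

/* Print and sum the per-step durations of FTL management trace
 * records read from stdin, relying only on the "name: <step>" /
 * "duration: <ms> ms" pairs that trace_step() emits above. */
int main(void)
{
	char line[4096], name[256] = "?";
	double ms, total = 0.0;
	char *p;

	while (fgets(line, sizeof(line), stdin)) {
		if ((p = strstr(line, "name: ")) != NULL) {
			sscanf(p + 6, " %255[^\n]", name);
		} else if ((p = strstr(line, "duration: ")) != NULL &&
			   sscanf(p + 10, "%lf ms", &ms) == 1) {
			printf("%10.3f ms  %s\n", ms, name);
			total += ms;
		}
	}
	printf("%10.3f ms  total\n", total);
	return 0;
}

Run over the startup sequence above, this attributes the bulk of the 428.451 ms 'FTL startup' total to its two slowest steps, Restore P2L checkpoints (98.132 ms) and Initialize NV cache (65.621 ms).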
00:30:56.125 [2024-11-20 11:45:01.857929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.125 [2024-11-20 11:45:01.858021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:56.125 [2024-11-20 11:45:01.858041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.132 ms 00:30:56.125 [2024-11-20 11:45:01.858054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.125 [2024-11-20 11:45:01.868820] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:56.125 [2024-11-20 11:45:01.873524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.125 [2024-11-20 11:45:01.873555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:56.125 [2024-11-20 11:45:01.873571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.405 ms 00:30:56.125 [2024-11-20 11:45:01.873583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.125 [2024-11-20 11:45:01.873727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.125 [2024-11-20 11:45:01.873742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:56.125 [2024-11-20 11:45:01.873754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:56.125 [2024-11-20 11:45:01.873765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.125 [2024-11-20 11:45:01.873857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.125 [2024-11-20 11:45:01.873871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:56.125 [2024-11-20 11:45:01.873883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:30:56.125 [2024-11-20 11:45:01.873893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.125 [2024-11-20 11:45:01.873918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.125 [2024-11-20 11:45:01.873935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:56.125 [2024-11-20 11:45:01.873946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:56.125 [2024-11-20 11:45:01.873957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.125 [2024-11-20 11:45:01.874000] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:56.125 [2024-11-20 11:45:01.874014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.125 [2024-11-20 11:45:01.874025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:56.125 [2024-11-20 11:45:01.874037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:30:56.125 [2024-11-20 11:45:01.874047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.385 [2024-11-20 11:45:01.913246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.385 [2024-11-20 11:45:01.913319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:56.385 [2024-11-20 11:45:01.913338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.170 ms 00:30:56.385 [2024-11-20 11:45:01.913350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.385 [2024-11-20 11:45:01.913448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:56.385 [2024-11-20 
11:45:01.913463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:56.385 [2024-11-20 11:45:01.913503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:30:56.385 [2024-11-20 11:45:01.913516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:56.385 [2024-11-20 11:45:01.915173] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 428.451 ms, result 0 00:30:57.320  [2024-11-20T11:45:04.019Z] Copying: 33/1024 [MB] (33 MBps) [2024-11-20T11:45:04.956Z] Copying: 65/1024 [MB] (32 MBps) [2024-11-20T11:45:06.334Z] Copying: 97/1024 [MB] (31 MBps) [2024-11-20T11:45:07.271Z] Copying: 127/1024 [MB] (30 MBps) [2024-11-20T11:45:08.207Z] Copying: 158/1024 [MB] (30 MBps) [2024-11-20T11:45:09.223Z] Copying: 189/1024 [MB] (31 MBps) [2024-11-20T11:45:10.159Z] Copying: 220/1024 [MB] (31 MBps) [2024-11-20T11:45:11.095Z] Copying: 252/1024 [MB] (31 MBps) [2024-11-20T11:45:12.032Z] Copying: 284/1024 [MB] (32 MBps) [2024-11-20T11:45:12.968Z] Copying: 314/1024 [MB] (30 MBps) [2024-11-20T11:45:14.346Z] Copying: 340/1024 [MB] (26 MBps) [2024-11-20T11:45:15.279Z] Copying: 369/1024 [MB] (28 MBps) [2024-11-20T11:45:16.211Z] Copying: 398/1024 [MB] (28 MBps) [2024-11-20T11:45:17.144Z] Copying: 426/1024 [MB] (28 MBps) [2024-11-20T11:45:18.082Z] Copying: 454/1024 [MB] (27 MBps) [2024-11-20T11:45:19.018Z] Copying: 481/1024 [MB] (26 MBps) [2024-11-20T11:45:19.954Z] Copying: 508/1024 [MB] (27 MBps) [2024-11-20T11:45:21.330Z] Copying: 535/1024 [MB] (27 MBps) [2024-11-20T11:45:22.265Z] Copying: 565/1024 [MB] (29 MBps) [2024-11-20T11:45:23.201Z] Copying: 594/1024 [MB] (29 MBps) [2024-11-20T11:45:24.166Z] Copying: 624/1024 [MB] (29 MBps) [2024-11-20T11:45:25.100Z] Copying: 652/1024 [MB] (28 MBps) [2024-11-20T11:45:26.036Z] Copying: 681/1024 [MB] (29 MBps) [2024-11-20T11:45:26.972Z] Copying: 711/1024 [MB] (29 MBps) [2024-11-20T11:45:28.349Z] Copying: 740/1024 [MB] (29 MBps) [2024-11-20T11:45:29.289Z] Copying: 769/1024 [MB] (29 MBps) [2024-11-20T11:45:30.225Z] Copying: 797/1024 [MB] (27 MBps) [2024-11-20T11:45:31.162Z] Copying: 827/1024 [MB] (29 MBps) [2024-11-20T11:45:32.098Z] Copying: 856/1024 [MB] (29 MBps) [2024-11-20T11:45:33.035Z] Copying: 885/1024 [MB] (29 MBps) [2024-11-20T11:45:33.971Z] Copying: 914/1024 [MB] (29 MBps) [2024-11-20T11:45:35.347Z] Copying: 944/1024 [MB] (30 MBps) [2024-11-20T11:45:36.285Z] Copying: 974/1024 [MB] (29 MBps) [2024-11-20T11:45:37.222Z] Copying: 1003/1024 [MB] (28 MBps) [2024-11-20T11:45:37.222Z] Copying: 1023/1024 [MB] (20 MBps) [2024-11-20T11:45:37.222Z] Copying: 1024/1024 [MB] (average 29 MBps)[2024-11-20 11:45:37.152539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.460 [2024-11-20 11:45:37.152617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:31.460 [2024-11-20 11:45:37.152644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:31.460 [2024-11-20 11:45:37.152660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.460 [2024-11-20 11:45:37.155510] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:31.460 [2024-11-20 11:45:37.165698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.460 [2024-11-20 11:45:37.165748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:31.460 [2024-11-20 11:45:37.165771] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.129 ms 00:31:31.460 [2024-11-20 11:45:37.165788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.460 [2024-11-20 11:45:37.180305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.460 [2024-11-20 11:45:37.180363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:31.460 [2024-11-20 11:45:37.180387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.952 ms 00:31:31.460 [2024-11-20 11:45:37.180404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.460 [2024-11-20 11:45:37.204447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.460 [2024-11-20 11:45:37.204509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:31.460 [2024-11-20 11:45:37.204530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.015 ms 00:31:31.460 [2024-11-20 11:45:37.204548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.460 [2024-11-20 11:45:37.212859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.460 [2024-11-20 11:45:37.212915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:31.460 [2024-11-20 11:45:37.212945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.264 ms 00:31:31.460 [2024-11-20 11:45:37.212961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.719 [2024-11-20 11:45:37.282523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.719 [2024-11-20 11:45:37.282885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:31.719 [2024-11-20 11:45:37.282929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.474 ms 00:31:31.719 [2024-11-20 11:45:37.282950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.719 [2024-11-20 11:45:37.319400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.719 [2024-11-20 11:45:37.319496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:31.719 [2024-11-20 11:45:37.319527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.372 ms 00:31:31.719 [2024-11-20 11:45:37.319549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.719 [2024-11-20 11:45:37.388542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.720 [2024-11-20 11:45:37.388626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:31.720 [2024-11-20 11:45:37.388655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.905 ms 00:31:31.720 [2024-11-20 11:45:37.388689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.720 [2024-11-20 11:45:37.432184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.720 [2024-11-20 11:45:37.432252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:31.720 [2024-11-20 11:45:37.432272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.459 ms 00:31:31.720 [2024-11-20 11:45:37.432285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.720 [2024-11-20 11:45:37.470599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.720 [2024-11-20 11:45:37.470666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist trim metadata 00:31:31.720 [2024-11-20 11:45:37.470686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.255 ms 00:31:31.720 [2024-11-20 11:45:37.470699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.979 [2024-11-20 11:45:37.508100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.979 [2024-11-20 11:45:37.508150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:31.979 [2024-11-20 11:45:37.508167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.347 ms 00:31:31.979 [2024-11-20 11:45:37.508180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.979 [2024-11-20 11:45:37.544792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.979 [2024-11-20 11:45:37.544847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:31.979 [2024-11-20 11:45:37.544864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.508 ms 00:31:31.979 [2024-11-20 11:45:37.544875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.979 [2024-11-20 11:45:37.544925] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:31.979 [2024-11-20 11:45:37.544947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 64000 / 261120 wr_cnt: 1 state: open 00:31:31.979 [2024-11-20 11:45:37.544964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:31.979 [2024-11-20 11:45:37.544977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:31.979 [2024-11-20 11:45:37.544990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:31.979 [2024-11-20 11:45:37.545002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:31.979 [2024-11-20 11:45:37.545015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:31.979 [2024-11-20 11:45:37.545027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:31.979 [2024-11-20 11:45:37.545039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:31.979 [2024-11-20 11:45:37.545051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:31.979 [2024-11-20 11:45:37.545070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:31.979 [2024-11-20 11:45:37.545100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:31.979 [2024-11-20 11:45:37.545114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:31.979 [2024-11-20 11:45:37.545127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:31.979 [2024-11-20 11:45:37.545140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:31.979 [2024-11-20 11:45:37.545153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:31.979 [2024-11-20 11:45:37.545166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 
00:31:31.979 [2024-11-20 11:45:37.545179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:31.979 [2024-11-20 11:45:37.545192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:31.979 [2024-11-20 11:45:37.545205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:31.979 [2024-11-20 11:45:37.545218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:31.980 [2024-11-20 11:45:37.545238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:31.980 [2024-11-20 11:45:37.545252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:31.980 [2024-11-20 11:45:37.545265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:31.980 [2024-11-20 11:45:37.545278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:31.980 [2024-11-20 11:45:37.545290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:31.980 [2024-11-20 11:45:37.545303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:31.980 [2024-11-20 11:45:37.545317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:31.980 [2024-11-20 11:45:37.545331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:31.980 [2024-11-20 11:45:37.545344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:31.980 [2024-11-20 11:45:37.545358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:31.980 [2024-11-20 11:45:37.545370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:31.980 [2024-11-20 11:45:37.545383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:31.980 [2024-11-20 11:45:37.545397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:31.980 [2024-11-20 11:45:37.545412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:31.980 [2024-11-20 11:45:37.545425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:31.980 [2024-11-20 11:45:37.545438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:31.980 [2024-11-20 11:45:37.545452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:31.980 [2024-11-20 11:45:37.545465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:31.980 [2024-11-20 11:45:37.545478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:31.980 [2024-11-20 11:45:37.545512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:31.980 [2024-11-20 11:45:37.545526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 
wr_cnt: 0 state: free
00:31:31.980 [2024-11-20 11:45:37.545540 - 11:45:37.546314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 42-100: 0 / 261120 wr_cnt: 0 state: free (59 identical entries)
00:31:31.980 [2024-11-20 11:45:37.546335] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:31:31.980 [2024-11-20 11:45:37.546347] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bf3c597d-5cce-48af-a3f6-c932c9b7fc69
00:31:31.980 [2024-11-20 11:45:37.546361] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 64000
00:31:31.980 [2024-11-20 11:45:37.546381] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 64960
00:31:31.980 [2024-11-20 11:45:37.546409] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 64000
00:31:31.980 [2024-11-20 11:45:37.546421] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0150
00:31:31.980 [2024-11-20 11:45:37.546433] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:31:31.980 [2024-11-20 11:45:37.546444 - .546477] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0, high: 0, low: 0, start: 0
00:31:31.980 [2024-11-20 11:45:37.546504] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Dump statistics': duration: 1.581 ms, status: 0
00:31:31.980 [2024-11-20 11:45:37.567547] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinitialize L2P': duration: 20.962 ms, status: 0
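The WAF figure in the statistics dump above is just total media writes divided by user writes; a quick cross-check with the two totals from the same dump (illustrative shell only, not part of the test run) reproduces it:

  $ awk 'BEGIN { printf "WAF = %.4f\n", 64960 / 64000 }'
  WAF = 1.0150

i.e. about 1.5% of the writes hitting the media were FTL overhead on top of what the user issued.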
00:31:31.980 [2024-11-20 11:45:37.568225] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinitialize P2L checkpointing': duration: 0.564 ms, status: 0
00:31:31.980 [2024-11-20 11:45:37.624255 - 11:45:37.868194] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback steps (each duration: 0.000 ms, status: 0): Initialize reloc, Initialize bands metadata, Initialize trim map, Initialize valid map, Initialize NV cache, Initialize metadata, Initialize core IO channel, Initialize bands, Initialize memory pools, Initialize superblock, Open cache bdev, Open base bdev
00:31:32.239 [2024-11-20 11:45:37.868394] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 719.190 ms, result 0
00:31:33.618
00:31:33.618
00:31:33.618 11:45:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2
00:31:35.564 11:45:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:31:35.564 [2024-11-20 11:45:41.088350] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization...
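For scale: spdk_dd's --count=262144 is in bdev blocks. Assuming ftl0's 4096-byte block size (consistent with the 1024 MB total the copy progress reports further down), that is a 1 GiB transfer (illustrative shell):

  $ echo $(( 262144 * 4096 / 1024 / 1024 ))   # MiB
  1024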
00:31:35.564 [2024-11-20 11:45:41.088488] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82897 ]
00:31:35.564 [2024-11-20 11:45:41.270945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:35.823 [2024-11-20 11:45:41.438353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:31:36.083 [2024-11-20 11:45:41.798381, .798454] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 (logged twice)
00:31:36.343 [2024-11-20 11:45:41.960045] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Check configuration': duration: 0.005 ms, status: 0
00:31:36.343 [2024-11-20 11:45:41.960182] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Open base bdev': duration: 0.027 ms, status: 0
00:31:36.343 [2024-11-20 11:45:41.960240] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:31:36.343 [2024-11-20 11:45:41.961302] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:31:36.343 [2024-11-20 11:45:41.961339] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Open cache bdev': duration: 1.104 ms, status: 0
00:31:36.343 [2024-11-20 11:45:41.962834] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:31:36.343 [2024-11-20 11:45:41.982589] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Load super block': duration: 19.755 ms, status: 0
00:31:36.343 [2024-11-20 11:45:41.982726] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Validate super block': duration: 0.025 ms, status: 0
00:31:36.343 [2024-11-20 11:45:41.989630] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize memory pools': duration: 6.795 ms, status: 0
00:31:36.343 [2024-11-20 11:45:41.989782] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize bands': duration: 0.062 ms, status: 0
00:31:36.343 [2024-11-20 11:45:41.989877] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Register IO device': duration: 0.008 ms, status: 0
00:31:36.343 [2024-11-20 11:45:41.989936] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:31:36.344 [2024-11-20 11:45:41.994908] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize core IO channel': duration: 4.978 ms, status: 0
00:31:36.344 [2024-11-20 11:45:41.994997] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Decorate bands': duration: 0.009 ms, status: 0
00:31:36.344 [2024-11-20 11:45:41.995084] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:31:36.344 [2024-11-20 11:45:41.995107 - .995164] upgrade/ftl_sb_v5.c: 278-294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes; base layout blob load 0x48 bytes; layout blob load 0x190 bytes
00:31:36.344 [2024-11-20 11:45:41.995253 - .995279] upgrade/ftl_sb_v5.c: 92-109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes; base layout blob store 0x48 bytes; layout blob store 0x190 bytes
00:31:36.344 [2024-11-20 11:45:41.995292] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
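A sanity check on the geometry (illustrative shell, assuming the 4 KiB FTL block size): each band holds 261120 blocks per the band dumps, i.e. 1020 MiB, so the 100 bands account for roughly 102000 MiB of the 103424.00 MiB base device just reported:

  $ echo $(( 261120 * 4096 / 1024 / 1024 ))   # MiB per band
  1020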
00:31:36.344 [2024-11-20 11:45:41.995304] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:31:36.344 [2024-11-20 11:45:41.995315] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:31:36.344 [2024-11-20 11:45:41.995325] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:31:36.344 [2024-11-20 11:45:41.995335] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:31:36.344 [2024-11-20 11:45:41.995345] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:31:36.344 [2024-11-20 11:45:41.995359] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize layout': duration: 0.278 ms, status: 0
00:31:36.344 [2024-11-20 11:45:41.995462] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Verify layout': duration: 0.056 ms, status: 0
00:31:36.344 [2024-11-20 11:45:41.995608] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:31:36.344 [2024-11-20 11:45:41.995641 - .996042] ftl_layout.c: 130-133:dump_region: *NOTICE*: [FTL][ftl0]
  Region sb:              offset:   0.00 MiB, blocks:  0.12 MiB
  Region l2p:             offset:   0.12 MiB, blocks: 80.00 MiB
  Region band_md:         offset:  80.12 MiB, blocks:  0.50 MiB
  Region band_md_mirror:  offset:  80.62 MiB, blocks:  0.50 MiB
  Region nvc_md:          offset: 113.88 MiB, blocks:  0.12 MiB
  Region nvc_md_mirror:   offset: 114.00 MiB, blocks:  0.12 MiB
  Region p2l0:            offset:  81.12 MiB, blocks:  8.00 MiB
  Region p2l1:            offset:  89.12 MiB, blocks:  8.00 MiB
  Region p2l2:            offset:  97.12 MiB, blocks:  8.00 MiB
  Region p2l3:            offset: 105.12 MiB, blocks:  8.00 MiB
  Region trim_md:         offset: 113.12 MiB, blocks:  0.25 MiB
  Region trim_md_mirror:  offset: 113.38 MiB, blocks:  0.25 MiB
  Region trim_log:        offset: 113.62 MiB, blocks:  0.12 MiB
  Region trim_log_mirror: offset: 113.75 MiB, blocks:  0.12 MiB
00:31:36.344 [2024-11-20 11:45:41.996051] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:31:36.344 [2024-11-20 11:45:41.996061 - .996110] ftl_layout.c: 130-133:dump_region: *NOTICE*: [FTL][ftl0]
  Region sb_mirror:       offset:      0.00 MiB, blocks:  0.12 MiB
  Region vmap:            offset: 102400.25 MiB, blocks:  3.38 MiB
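The l2p region size in the NV cache layout above follows directly from two figures earlier in the dump: 20971520 L2P entries at 4 bytes each is exactly the 80.00 MiB shown for Region l2p (illustrative shell):

  $ echo $(( 20971520 * 4 / 1024 / 1024 ))   # MiB
  80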
00:31:36.344 [2024-11-20 11:45:41.996120] ftl_layout.c: 130-133:dump_region: *NOTICE*: [FTL][ftl0]
  Region data_btm:        offset:      0.25 MiB, blocks: 102400.00 MiB
00:31:36.344 [2024-11-20 11:45:41.996149] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:31:36.344 [2024-11-20 11:45:41.996162 - .996309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0]
  Region type:0x0        ver:5 blk_offs:0x0    blk_sz:0x20
  Region type:0x2        ver:0 blk_offs:0x20   blk_sz:0x5000
  Region type:0x3        ver:2 blk_offs:0x5020 blk_sz:0x80
  Region type:0x4        ver:2 blk_offs:0x50a0 blk_sz:0x80
  Region type:0xa        ver:2 blk_offs:0x5120 blk_sz:0x800
  Region type:0xb        ver:2 blk_offs:0x5920 blk_sz:0x800
  Region type:0xc        ver:2 blk_offs:0x6120 blk_sz:0x800
  Region type:0xd        ver:2 blk_offs:0x6920 blk_sz:0x800
  Region type:0xe        ver:0 blk_offs:0x7120 blk_sz:0x40
  Region type:0xf        ver:0 blk_offs:0x7160 blk_sz:0x40
  Region type:0x10       ver:1 blk_offs:0x71a0 blk_sz:0x20
  Region type:0x11       ver:1 blk_offs:0x71c0 blk_sz:0x20
  Region type:0x6        ver:2 blk_offs:0x71e0 blk_sz:0x20
  Region type:0x7        ver:2 blk_offs:0x7200 blk_sz:0x20
  Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:31:36.345 [2024-11-20 11:45:41.996319] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:31:36.345 [2024-11-20 11:45:41.996334 - .996377] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0]
  Region type:0x1        ver:5 blk_offs:0x0       blk_sz:0x20
  Region type:0xfffffffe ver:0 blk_offs:0x20      blk_sz:0x20
  Region type:0x9        ver:0 blk_offs:0x40      blk_sz:0x1900000
  Region type:0x5        ver:0 blk_offs:0x1900040 blk_sz:0x360
  Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:31:36.345 [2024-11-20 11:45:41.996388] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Layout upgrade': duration: 0.835 ms, status: 0
00:31:36.345 [2024-11-20 11:45:42.039261] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize metadata': duration: 42.793 ms, status: 0
00:31:36.345 [2024-11-20 11:45:42.039644] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize band addresses': duration: 0.062 ms, status: 0
00:31:36.605 [2024-11-20 11:45:42.101835] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize NV cache': duration: 62.073 ms, status: 0
00:31:36.605 [2024-11-20 11:45:42.101963] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize valid map': duration: 0.003 ms, status: 0
00:31:36.605 [2024-11-20 11:45:42.102509] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize trim map': duration: 0.429 ms, status: 0
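The blk_offs/blk_sz values in the superblock layout dump above are in 4 KiB blocks; for example blk_sz:0x5000 is 20480 blocks, i.e. 80 MiB, which lines up with the l2p region in the MiB-denominated dump earlier (illustrative shell):

  $ echo $(( 0x5000 )) $(( 0x5000 * 4096 / 1024 / 1024 ))
  20480 80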
00:31:36.605 [2024-11-20 11:45:42.102665] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize bands metadata': duration: 0.097 ms, status: 0
00:31:36.605 [2024-11-20 11:45:42.122506] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize reloc': duration: 19.777 ms, status: 0
00:31:36.605 [2024-11-20 11:45:42.141851] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0
00:31:36.605 [2024-11-20 11:45:42.141889] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:31:36.605 [2024-11-20 11:45:42.141905] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore NV cache metadata': duration: 19.045 ms, status: 0
00:31:36.605 [2024-11-20 11:45:42.172396] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore valid map metadata': duration: 30.395 ms, status: 0
00:31:36.605 [2024-11-20 11:45:42.191769] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore band info metadata': duration: 19.032 ms, status: 0
00:31:36.605 [2024-11-20 11:45:42.210251] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore trim metadata': duration: 18.246 ms, status: 0
00:31:36.605 [2024-11-20 11:45:42.211234] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize P2L checkpointing': duration: 0.799 ms, status: 0
00:31:36.605 [2024-11-20 11:45:42.310287] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore P2L checkpoints': duration: 98.969 ms, status: 0
00:31:36.605 [2024-11-20 11:45:42.321962] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:31:36.605 [2024-11-20 11:45:42.325204] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize L2P': duration: 14.761 ms, status: 0
00:31:36.606 [2024-11-20 11:45:42.325544] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore L2P': duration: 0.008 ms, status: 0
00:31:36.606 [2024-11-20 11:45:42.326831] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Finalize band initialization': duration: 1.195 ms, status: 0
00:31:36.606 [2024-11-20 11:45:42.326925] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Start core poller': duration: 0.005 ms, status: 0
00:31:36.606 [2024-11-20 11:45:42.326996] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:31:36.606 [2024-11-20 11:45:42.327012] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Self test on startup': duration: 0.017 ms, status: 0
00:31:36.865 [2024-11-20 11:45:42.364172] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Set FTL dirty state': duration: 37.109 ms, status: 0
00:31:36.865 [2024-11-20 11:45:42.364322] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Finalize initialization': duration: 0.037 ms, status: 0
00:31:36.865 [2024-11-20 11:45:42.365546] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 404.962 ms, result 0
00:31:38.245 [2024-11-20T11:45:44.603Z - 11:46:12.328Z] Copying: 1528/1048576 [kB] (1528 kBps), 4968/1048576 [kB] (3440 kBps), then 30/1024 [MB] (25 MBps) rising to a steady 37-39 MBps per interval, through 1024/1024 [MB] (average 35 MBps)
00:32:06.566 [2024-11-20 11:46:12.261590] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinit core IO channel': duration: 0.009 ms, status: 0
00:32:06.566 [2024-11-20 11:46:12.261761] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:32:06.566 [2024-11-20 11:46:12.268329] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Unregister IO device': duration: 6.538 ms, status: 0
00:32:06.567 [2024-11-20 11:46:12.268746] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Stop core poller': duration: 0.281 ms, status: 0
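The reported average above is consistent with the wall clock: 1024 MB at 35 MBps is about 29 s, and the progress timestamps span 11:45:44 to 11:46:12, roughly 28 s (illustrative shell):

  $ echo $(( 1024 / 35 ))   # seconds (integer)
  29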
00:32:06.567 [2024-11-20 11:46:12.285702] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist L2P': duration: 16.785 ms, status: 0
00:32:06.567 [2024-11-20 11:46:12.294168] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Finish L2P trims': duration: 8.324 ms, status: 0
00:32:06.825 [2024-11-20 11:46:12.355161] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist NV cache metadata': duration: 60.832 ms, status: 0
00:32:06.825 [2024-11-20 11:46:12.387666] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist valid map metadata': duration: 32.336 ms, status: 0
00:32:06.825 [2024-11-20 11:46:12.389817] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist P2L metadata': duration: 1.978 ms, status: 0
00:32:06.825 [2024-11-20 11:46:12.449661] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist band info metadata': duration: 59.721 ms, status: 0
00:32:06.825 [2024-11-20 11:46:12.510634] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist trim metadata': duration: 60.786 ms, status: 0
00:32:06.825 [2024-11-20 11:46:12.553748] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist superblock': duration: 42.730 ms, status: 0
00:32:07.085 [2024-11-20 11:46:12.592339] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action 'Set FTL clean state': duration: 38.256 ms, status: 0
00:32:07.085 [2024-11-20 11:46:12.592465] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:32:07.085 [2024-11-20 11:46:12.592505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
00:32:07.085 [2024-11-20 11:46:12.592520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open
00:32:07.085 [2024-11-20 11:46:12.592533 - 11:46:12.593653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 3-95: 0 / 261120 wr_cnt: 0 state: free (93 identical entries; the dump continues below)
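The band accounting lines up with the copy: the 262144 blocks written by spdk_dd fill Band 1 and spill 1536 blocks into Band 2, the 512-block difference presumably being FTL metadata written alongside the user data (illustrative shell):

  $ echo $(( 261120 + 1536 - 262144 ))   # blocks beyond user data
  512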
00:32:07.086 [2024-11-20 11:46:12.593664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]
Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:07.086 [2024-11-20 11:46:12.593676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:07.086 [2024-11-20 11:46:12.593687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:07.086 [2024-11-20 11:46:12.593699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:07.086 [2024-11-20 11:46:12.593711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:07.086 [2024-11-20 11:46:12.593730] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:07.086 [2024-11-20 11:46:12.593742] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bf3c597d-5cce-48af-a3f6-c932c9b7fc69 00:32:07.086 [2024-11-20 11:46:12.593754] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:32:07.086 [2024-11-20 11:46:12.593765] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 200640 00:32:07.086 [2024-11-20 11:46:12.593775] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 198656 00:32:07.086 [2024-11-20 11:46:12.593797] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0100 00:32:07.086 [2024-11-20 11:46:12.593807] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:07.086 [2024-11-20 11:46:12.593819] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:07.086 [2024-11-20 11:46:12.593829] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:07.086 [2024-11-20 11:46:12.593851] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:07.086 [2024-11-20 11:46:12.593861] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:07.086 [2024-11-20 11:46:12.593871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.086 [2024-11-20 11:46:12.593882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:07.086 [2024-11-20 11:46:12.593895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.407 ms 00:32:07.086 [2024-11-20 11:46:12.593906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.086 [2024-11-20 11:46:12.615652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.086 [2024-11-20 11:46:12.615801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:07.086 [2024-11-20 11:46:12.615839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.681 ms 00:32:07.086 [2024-11-20 11:46:12.615851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.086 [2024-11-20 11:46:12.616417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:07.086 [2024-11-20 11:46:12.616434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:07.086 [2024-11-20 11:46:12.616447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.541 ms 00:32:07.086 [2024-11-20 11:46:12.616459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.086 [2024-11-20 11:46:12.671260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:07.086 [2024-11-20 11:46:12.671302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:07.086 [2024-11-20 11:46:12.671316] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:07.086 [2024-11-20 11:46:12.671327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.086 [2024-11-20 11:46:12.671384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:07.086 [2024-11-20 11:46:12.671395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:07.086 [2024-11-20 11:46:12.671406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:07.086 [2024-11-20 11:46:12.671416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.086 [2024-11-20 11:46:12.671497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:07.086 [2024-11-20 11:46:12.671516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:07.086 [2024-11-20 11:46:12.671527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:07.086 [2024-11-20 11:46:12.671537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.086 [2024-11-20 11:46:12.671555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:07.086 [2024-11-20 11:46:12.671566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:07.086 [2024-11-20 11:46:12.671577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:07.086 [2024-11-20 11:46:12.671587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.086 [2024-11-20 11:46:12.802956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:07.086 [2024-11-20 11:46:12.803021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:07.086 [2024-11-20 11:46:12.803037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:07.086 [2024-11-20 11:46:12.803048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.345 [2024-11-20 11:46:12.909276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:07.345 [2024-11-20 11:46:12.909341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:07.345 [2024-11-20 11:46:12.909357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:07.345 [2024-11-20 11:46:12.909369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.345 [2024-11-20 11:46:12.909482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:07.345 [2024-11-20 11:46:12.909516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:07.345 [2024-11-20 11:46:12.909535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:07.345 [2024-11-20 11:46:12.909546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.345 [2024-11-20 11:46:12.909595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:07.345 [2024-11-20 11:46:12.909608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:07.345 [2024-11-20 11:46:12.909620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:07.345 [2024-11-20 11:46:12.909631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.345 [2024-11-20 11:46:12.909762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:07.345 [2024-11-20 11:46:12.909777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory 
pools 00:32:07.345 [2024-11-20 11:46:12.909788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:07.345 [2024-11-20 11:46:12.909804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.345 [2024-11-20 11:46:12.909843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:07.345 [2024-11-20 11:46:12.909856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:07.345 [2024-11-20 11:46:12.909868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:07.345 [2024-11-20 11:46:12.909879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.345 [2024-11-20 11:46:12.909920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:07.345 [2024-11-20 11:46:12.909932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:07.345 [2024-11-20 11:46:12.909943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:07.345 [2024-11-20 11:46:12.909958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.345 [2024-11-20 11:46:12.910003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:07.345 [2024-11-20 11:46:12.910015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:07.345 [2024-11-20 11:46:12.910027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:07.345 [2024-11-20 11:46:12.910037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:07.345 [2024-11-20 11:46:12.910160] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 648.546 ms, result 0 00:32:08.280 00:32:08.280 00:32:08.280 11:46:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:10.182 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:32:10.182 11:46:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:10.182 [2024-11-20 11:46:15.933650] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
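Two things worth noting at this point in the log. First, dirty_shutdown.sh verifies the data written before the dirty shutdown (md5sum -c, step @94) and then reads the second region back out of ftl0 via spdk_dd (step @95). Second, the counters in the shutdown stats dump are internally consistent; a quick illustrative check in Python (a throwaway snippet of ours, not SPDK code):

    total_writes = 200_640   # "total writes" from ftl_dev_dump_stats above
    user_writes = 198_656    # "user writes"
    print(f"WAF = {total_writes / user_writes:.4f}")  # -> WAF = 1.0100, matching the log
    # One closed band plus the 1536 valid blocks of the open band (see the
    # band validity dump at the next clean shutdown) gives the LBA total:
    print(261_120 + 1_536)   # -> 262656, the logged "total valid LBAs"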
00:32:10.182 [2024-11-20 11:46:15.934093] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83245 ]
00:32:10.440 [2024-11-20 11:46:16.135376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:10.699 [2024-11-20 11:46:16.308706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:32:10.958 [2024-11-20 11:46:16.682737] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:32:10.958 [2024-11-20 11:46:16.682810] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:32:11.217 [2024-11-20 11:46:16.845328] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Check configuration': duration: 0.006 ms, status: 0
00:32:11.217 [2024-11-20 11:46:16.845507] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Open base bdev': duration: 0.048 ms, status: 0
00:32:11.217 [2024-11-20 11:46:16.845574] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:32:11.217 [2024-11-20 11:46:16.846599] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:32:11.217 [2024-11-20 11:46:16.846627] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Open cache bdev': duration: 1.058 ms, status: 0
00:32:11.217 [2024-11-20 11:46:16.848145] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:32:11.217 [2024-11-20 11:46:16.868254] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Load super block': duration: 20.108 ms, status: 0
00:32:11.217 [2024-11-20 11:46:16.868388] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Validate super block': duration: 0.025 ms, status: 0
00:32:11.217 [2024-11-20 11:46:16.875449] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize memory pools': duration: 6.780 ms, status: 0
00:32:11.217 [2024-11-20 11:46:16.875603] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize bands': duration: 0.062 ms, status: 0
00:32:11.217 [2024-11-20 11:46:16.875680] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Register IO device': duration: 0.008 ms, status: 0
00:32:11.217 [2024-11-20 11:46:16.875740] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:32:11.217 [2024-11-20 11:46:16.880763] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize core IO channel': duration: 5.030 ms, status: 0
00:32:11.218 [2024-11-20 11:46:16.881138] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Decorate bands': duration: 0.011 ms, status: 0
00:32:11.218 [2024-11-20 11:46:16.881391] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:32:11.218 [2024-11-20 11:46:16.881444] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:32:11.218 [2024-11-20 11:46:16.881659] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:32:11.218 [2024-11-20 11:46:16.881725] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:32:11.218 [2024-11-20 11:46:16.881921] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:32:11.218 [2024-11-20 11:46:16.882064] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:32:11.218 [2024-11-20 11:46:16.882146] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:32:11.218 [2024-11-20 11:46:16.882165] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:32:11.218 [2024-11-20 11:46:16.882190] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:32:11.218 [2024-11-20 11:46:16.882202] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:32:11.218 [2024-11-20 11:46:16.882214] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:32:11.218 [2024-11-20 11:46:16.882225] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:32:11.218 [2024-11-20 11:46:16.882244] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:32:11.218 [2024-11-20 11:46:16.882261] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize layout': duration: 0.874 ms, status: 0
00:32:11.218 [2024-11-20 11:46:16.882387] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Verify layout': duration: 0.062 ms, status: 0
00:32:11.218 [2024-11-20 11:46:16.882645] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:32:11.218 [2024-11-20 11:46:16.882699] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region sb: offset 0.00 MiB, blocks 0.12 MiB
00:32:11.218 [2024-11-20 11:46:16.882926] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region l2p: offset 0.12 MiB, blocks 80.00 MiB
00:32:11.218 [2024-11-20 11:46:16.883003] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region band_md: offset 80.12 MiB, blocks 0.50 MiB
00:32:11.218 [2024-11-20 11:46:16.883034] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror: offset 80.62 MiB, blocks 0.50 MiB
00:32:11.218 [2024-11-20 11:46:16.883064] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md: offset 113.88 MiB, blocks 0.12 MiB
00:32:11.218 [2024-11-20 11:46:16.883106] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror: offset 114.00 MiB, blocks 0.12 MiB
00:32:11.218 [2024-11-20 11:46:16.883137] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l0: offset 81.12 MiB, blocks 8.00 MiB
00:32:11.218 [2024-11-20 11:46:16.883168] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l1: offset 89.12 MiB, blocks 8.00 MiB
00:32:11.218 [2024-11-20 11:46:16.883198] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l2: offset 97.12 MiB, blocks 8.00 MiB
00:32:11.218 [2024-11-20 11:46:16.883229] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region p2l3: offset 105.12 MiB, blocks 8.00 MiB
00:32:11.218 [2024-11-20 11:46:16.883260] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region trim_md: offset 113.12 MiB, blocks 0.25 MiB
00:32:11.218 [2024-11-20 11:46:16.883290] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror: offset 113.38 MiB, blocks 0.25 MiB
00:32:11.218 [2024-11-20 11:46:16.883320] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region trim_log: offset 113.62 MiB, blocks 0.12 MiB
00:32:11.218 [2024-11-20 11:46:16.883350] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror: offset 113.75 MiB, blocks 0.12 MiB
00:32:11.218 [2024-11-20 11:46:16.883380] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:32:11.218 [2024-11-20 11:46:16.883391] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror: offset 0.00 MiB, blocks 0.12 MiB
00:32:11.218 [2024-11-20 11:46:16.883424] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region vmap: offset 102400.25 MiB, blocks 3.38 MiB
00:32:11.218 [2024-11-20 11:46:16.883455] ftl_layout.c: dump_region: *NOTICE*: [FTL][ftl0] Region data_btm: offset 0.25 MiB, blocks 102400.00 MiB
00:32:11.218 [2024-11-20 11:46:16.883499] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:32:11.218 [2024-11-20 11:46:16.883514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:32:11.218 [2024-11-20 11:46:16.883526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:32:11.218 [2024-11-20 11:46:16.883538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:32:11.218 [2024-11-20 11:46:16.883549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:32:11.218 [2024-11-20 11:46:16.883562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:32:11.218 [2024-11-20 11:46:16.883574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:32:11.218 [2024-11-20 11:46:16.883585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:32:11.218 [2024-11-20 11:46:16.883596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:32:11.218 [2024-11-20 11:46:16.883607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:32:11.218 [2024-11-20 11:46:16.883619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:32:11.218 [2024-11-20 11:46:16.883630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:32:11.218 [2024-11-20 11:46:16.883642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:32:11.218 [2024-11-20 11:46:16.883653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:32:11.218 [2024-11-20 11:46:16.883665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:32:11.218 [2024-11-20 11:46:16.883677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:32:11.218 [2024-11-20 11:46:16.883688] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:32:11.218 [2024-11-20 11:46:16.883704] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:32:11.218 [2024-11-20 11:46:16.883717] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:32:11.219 [2024-11-20 11:46:16.883729] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:32:11.219 [2024-11-20 11:46:16.883740] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:32:11.219 [2024-11-20 11:46:16.883752] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:32:11.219 [2024-11-20 11:46:16.883765] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Layout upgrade': duration: 1.301 ms, status: 0
00:32:11.219 [2024-11-20 11:46:16.924327] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize metadata': duration: 40.465 ms, status: 0
00:32:11.219 [2024-11-20 11:46:16.924902] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize band addresses': duration: 0.061 ms, status: 0
00:32:11.478 [2024-11-20 11:46:16.979241] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize NV cache': duration: 54.041 ms, status: 0
00:32:11.478 [2024-11-20 11:46:16.979709] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize valid map': duration: 0.004 ms, status: 0
00:32:11.479 [2024-11-20 11:46:16.980533] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize trim map': duration: 0.439 ms, status: 0
00:32:11.479 [2024-11-20 11:46:16.980989] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize bands metadata': duration: 0.104 ms, status: 0
00:32:11.479 [2024-11-20 11:46:17.002214] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize reloc': duration: 20.921 ms, status: 0
00:32:11.479 [2024-11-20 11:46:17.022206] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:32:11.479 [2024-11-20 11:46:17.022259] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:32:11.479 [2024-11-20 11:46:17.022275] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore NV cache metadata': duration: 19.880 ms, status: 0
00:32:11.479 [2024-11-20 11:46:17.054020] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore valid map metadata': duration: 31.670 ms, status: 0
00:32:11.479 [2024-11-20 11:46:17.073187] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore band info metadata': duration: 18.866 ms, status: 0
00:32:11.479 [2024-11-20 11:46:17.092112] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore trim metadata': duration: 18.679 ms, status: 0
00:32:11.479 [2024-11-20 11:46:17.093253] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize P2L checkpointing': duration: 0.801 ms, status: 0
00:32:11.479 [2024-11-20 11:46:17.182780] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore P2L checkpoints': duration: 89.441 ms, status: 0
00:32:11.479 [2024-11-20 11:46:17.194216] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:32:11.479 [2024-11-20 11:46:17.197499] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize L2P': duration: 14.535 ms, status: 0
00:32:11.479 [2024-11-20 11:46:17.197662] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore L2P': duration: 0.006 ms, status: 0
00:32:11.479 [2024-11-20 11:46:17.198621] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Finalize band initialization': duration: 0.848 ms, status: 0
00:32:11.479 [2024-11-20 11:46:17.198689] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Start core poller': duration: 0.005 ms, status: 0
00:32:11.479 [2024-11-20 11:46:17.198773] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:32:11.479 [2024-11-20 11:46:17.198790] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Self test on startup': duration: 0.017 ms, status: 0
00:32:11.738 [2024-11-20 11:46:17.237461] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Set FTL dirty state': duration: 38.616 ms, status: 0
00:32:11.738 [2024-11-20 11:46:17.237615] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Finalize initialization': duration: 0.036 ms, status: 0
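A useful cross-check on the layout dump above: the superblock region table is expressed in FTL blocks, and the MiB figures imply a 4 KiB block size (our inference from the numbers; the block size itself is never printed here). Illustrative check in Python:

    FTL_BLOCK = 4096  # assumed 4 KiB FTL block, inferred from the dump above
    # Region type:0x2 in the nvc SB table (blk_sz:0x5000) lines up with "Region l2p ... blocks: 80.00 MiB":
    print(0x5000 * FTL_BLOCK / 2**20)      # -> 80.0
    # which is exactly "L2P entries" x "L2P address size", one 4-byte entry per LBA:
    print(20_971_520 * 4 / 2**20)          # -> 80.0
    # Region type:0x9 in the base-dev table (blk_sz:0x1900000) matches "Region data_btm ... blocks: 102400.00 MiB":
    print(0x1900000 * FTL_BLOCK / 2**20)   # -> 102400.0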
00:32:11.738 [2024-11-20 11:46:17.238824] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 393.000 ms, result 0
00:32:13.130 [2024-11-20T11:46:19.830Z] Copying: 31/1024 [MB] (31 MBps)
[2024-11-20T11:46:20Z - 2024-11-20T11:46:50Z] Copying: 61/1024 .. 989/1024 [MB] (31 progress lines condensed, 28-31 MBps)
[2024-11-20T11:46:50.778Z] Copying: 1020/1024 [MB] (30 MBps)
[2024-11-20T11:46:51.038Z] Copying: 1024/1024 [MB] (average 30 MBps)
00:32:45.276 [2024-11-20 11:46:50.799337] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinit core IO channel': duration: 0.003 ms, status: 0
00:32:45.276 [2024-11-20 11:46:50.799448] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:32:45.276 [2024-11-20 11:46:50.803666] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Unregister IO device': duration: 4.200 ms, status: 0
00:32:45.276 [2024-11-20 11:46:50.803935] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Stop core poller': duration: 0.177 ms, status: 0
00:32:45.276 [2024-11-20 11:46:50.806728] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist L2P': duration: 2.742 ms, status: 0
00:32:45.276 [2024-11-20 11:46:50.813010] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Finish L2P trims': duration: 6.209 ms, status: 0
00:32:45.276 [2024-11-20 11:46:50.850289] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist NV cache metadata': duration: 37.132 ms, status: 0
00:32:45.276 [2024-11-20 11:46:50.870866] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist valid map metadata': duration: 20.447 ms, status: 0
00:32:45.276 [2024-11-20 11:46:50.872785] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist P2L metadata': duration: 1.798 ms, status: 0
00:32:45.276 [2024-11-20 11:46:50.910357] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist band info metadata': duration: 37.487 ms, status: 0
00:32:45.276 [2024-11-20 11:46:50.945613] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist trim metadata': duration: 35.140 ms, status: 0
00:32:45.276 [2024-11-20 11:46:50.979670] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist superblock': duration: 33.933 ms, status: 0
00:32:45.277 [2024-11-20 11:46:51.015835] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Set FTL clean state': duration: 36.034 ms, status: 0
00:32:45.277 [2024-11-20 11:46:51.015937] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:32:45.277 [2024-11-20 11:46:51.015956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
00:32:45.277 [2024-11-20 11:46:51.015975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open
00:32:45.277 [2024-11-20 11:46:51.015987 - 11:46:51.016963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 3-93: all 0 / 261120 wr_cnt: 0 state: free (91 identical lines condensed)
00:32:45.278 [2024-11-20 11:46:51.016974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]
Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:45.278 [2024-11-20 11:46:51.016984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:45.278 [2024-11-20 11:46:51.016995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:45.278 [2024-11-20 11:46:51.017005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:45.278 [2024-11-20 11:46:51.017015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:45.278 [2024-11-20 11:46:51.017026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:45.278 [2024-11-20 11:46:51.017037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:45.278 [2024-11-20 11:46:51.017061] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:45.278 [2024-11-20 11:46:51.017075] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bf3c597d-5cce-48af-a3f6-c932c9b7fc69 00:32:45.278 [2024-11-20 11:46:51.017086] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:32:45.278 [2024-11-20 11:46:51.017096] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:32:45.278 [2024-11-20 11:46:51.017107] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:45.278 [2024-11-20 11:46:51.017117] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:45.278 [2024-11-20 11:46:51.017127] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:45.278 [2024-11-20 11:46:51.017137] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:45.278 [2024-11-20 11:46:51.017158] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:45.278 [2024-11-20 11:46:51.017167] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:45.278 [2024-11-20 11:46:51.017176] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:45.278 [2024-11-20 11:46:51.017186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.278 [2024-11-20 11:46:51.017196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:45.278 [2024-11-20 11:46:51.017207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.250 ms 00:32:45.278 [2024-11-20 11:46:51.017218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.539 [2024-11-20 11:46:51.037733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.539 [2024-11-20 11:46:51.037769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:45.539 [2024-11-20 11:46:51.037782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.456 ms 00:32:45.539 [2024-11-20 11:46:51.037792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.539 [2024-11-20 11:46:51.038276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.539 [2024-11-20 11:46:51.038299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:45.539 [2024-11-20 11:46:51.038316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.462 ms 00:32:45.539 [2024-11-20 11:46:51.038327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.539 [2024-11-20 
11:46:51.090871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:45.539 [2024-11-20 11:46:51.090912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:45.539 [2024-11-20 11:46:51.090925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:45.539 [2024-11-20 11:46:51.090936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.539 [2024-11-20 11:46:51.090993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:45.539 [2024-11-20 11:46:51.091004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:45.539 [2024-11-20 11:46:51.091019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:45.539 [2024-11-20 11:46:51.091030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.539 [2024-11-20 11:46:51.091100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:45.539 [2024-11-20 11:46:51.091113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:45.539 [2024-11-20 11:46:51.091124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:45.539 [2024-11-20 11:46:51.091134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.539 [2024-11-20 11:46:51.091150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:45.539 [2024-11-20 11:46:51.091161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:45.539 [2024-11-20 11:46:51.091187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:45.539 [2024-11-20 11:46:51.091202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.539 [2024-11-20 11:46:51.222622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:45.539 [2024-11-20 11:46:51.222686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:45.539 [2024-11-20 11:46:51.222703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:45.539 [2024-11-20 11:46:51.222713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.812 [2024-11-20 11:46:51.328991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:45.812 [2024-11-20 11:46:51.329052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:45.812 [2024-11-20 11:46:51.329067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:45.812 [2024-11-20 11:46:51.329086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.812 [2024-11-20 11:46:51.329191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:45.812 [2024-11-20 11:46:51.329204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:45.812 [2024-11-20 11:46:51.329216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:45.812 [2024-11-20 11:46:51.329226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.812 [2024-11-20 11:46:51.329272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:45.812 [2024-11-20 11:46:51.329284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:45.812 [2024-11-20 11:46:51.329294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:45.812 [2024-11-20 11:46:51.329305] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.812 [2024-11-20 11:46:51.329441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:45.812 [2024-11-20 11:46:51.329456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:45.812 [2024-11-20 11:46:51.329468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:45.812 [2024-11-20 11:46:51.329480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.812 [2024-11-20 11:46:51.329534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:45.812 [2024-11-20 11:46:51.329548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:45.812 [2024-11-20 11:46:51.329560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:45.812 [2024-11-20 11:46:51.329572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.812 [2024-11-20 11:46:51.329618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:45.812 [2024-11-20 11:46:51.329631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:45.812 [2024-11-20 11:46:51.329644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:45.812 [2024-11-20 11:46:51.329655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.812 [2024-11-20 11:46:51.329700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:45.812 [2024-11-20 11:46:51.329714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:45.812 [2024-11-20 11:46:51.329725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:45.812 [2024-11-20 11:46:51.329736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.812 [2024-11-20 11:46:51.329863] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 530.490 ms, result 0 00:32:46.785 00:32:46.785 00:32:46.786 11:46:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:32:48.692 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:32:48.692 11:46:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:32:48.692 11:46:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:32:48.692 11:46:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:48.692 11:46:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:32:48.692 11:46:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:32:48.950 11:46:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:48.950 11:46:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:32:48.950 Process with pid 81604 is not found 00:32:48.950 11:46:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 81604 00:32:48.950 11:46:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81604 ']' 00:32:48.950 11:46:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 81604 00:32:48.950 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 
958: kill: (81604) - No such process 00:32:48.950 11:46:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 81604 is not found' 00:32:48.950 11:46:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:32:49.210 11:46:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:32:49.210 11:46:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:49.210 Remove shared memory files 00:32:49.210 11:46:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:32:49.210 11:46:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:32:49.210 11:46:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:32:49.210 11:46:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:49.210 11:46:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:32:49.210 00:32:49.210 real 3m17.294s 00:32:49.210 user 3m41.852s 00:32:49.210 sys 0m37.811s 00:32:49.210 11:46:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:49.210 ************************************ 00:32:49.210 END TEST ftl_dirty_shutdown 00:32:49.210 11:46:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:49.210 ************************************ 00:32:49.210 11:46:54 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:32:49.210 11:46:54 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:49.210 11:46:54 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:49.210 11:46:54 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:49.210 ************************************ 00:32:49.210 START TEST ftl_upgrade_shutdown 00:32:49.210 ************************************ 00:32:49.210 11:46:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:32:49.470 * Looking for test storage... 
00:32:49.470 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:32:49.470 11:46:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:49.470 11:46:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:32:49.470 11:46:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:49.470 11:46:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:49.470 11:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:49.470 11:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:49.470 11:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:49.470 11:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:32:49.470 11:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:32:49.470 11:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:32:49.470 11:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:32:49.470 11:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:32:49.470 11:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:32:49.470 11:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:32:49.470 11:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:49.470 11:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:32:49.470 11:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:32:49.470 11:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:49.470 11:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:49.470 11:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:32:49.470 11:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:32:49.470 11:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:49.470 11:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:32:49.470 11:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:32:49.470 11:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:32:49.470 11:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:32:49.470 11:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:49.470 11:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:32:49.470 11:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:32:49.470 11:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:49.470 11:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:49.470 11:46:55 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:32:49.470 11:46:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:49.470 11:46:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:49.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.470 --rc genhtml_branch_coverage=1 00:32:49.470 --rc genhtml_function_coverage=1 00:32:49.470 --rc genhtml_legend=1 00:32:49.470 --rc geninfo_all_blocks=1 00:32:49.470 --rc geninfo_unexecuted_blocks=1 00:32:49.470 00:32:49.470 ' 00:32:49.470 11:46:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:49.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.470 --rc genhtml_branch_coverage=1 00:32:49.470 --rc genhtml_function_coverage=1 00:32:49.470 --rc genhtml_legend=1 00:32:49.470 --rc geninfo_all_blocks=1 00:32:49.470 --rc geninfo_unexecuted_blocks=1 00:32:49.470 00:32:49.470 ' 00:32:49.470 11:46:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:49.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.470 --rc genhtml_branch_coverage=1 00:32:49.470 --rc genhtml_function_coverage=1 00:32:49.470 --rc genhtml_legend=1 00:32:49.470 --rc geninfo_all_blocks=1 00:32:49.470 --rc geninfo_unexecuted_blocks=1 00:32:49.470 00:32:49.470 ' 00:32:49.470 11:46:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:49.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.470 --rc genhtml_branch_coverage=1 00:32:49.470 --rc genhtml_function_coverage=1 00:32:49.470 --rc genhtml_legend=1 00:32:49.470 --rc geninfo_all_blocks=1 00:32:49.470 --rc geninfo_unexecuted_blocks=1 00:32:49.470 00:32:49.470 ' 00:32:49.470 11:46:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:32:49.470 11:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:32:49.470 11:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:32:49.470 11:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:32:49.470 11:46:55 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:32:49.470 11:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:32:49.470 11:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:49.470 11:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:32:49.470 11:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:32:49.471 11:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:49.471 11:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:49.471 11:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:32:49.471 11:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:32:49.471 11:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:49.471 11:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:49.471 11:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:32:49.471 11:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:32:49.471 11:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:49.471 11:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:49.471 11:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:32:49.471 11:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:32:49.471 11:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:32:49.471 11:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:32:49.471 11:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:49.471 11:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:49.471 11:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:32:49.471 11:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:32:49.471 11:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:49.471 11:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:49.471 11:46:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:49.471 11:46:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:32:49.471 11:46:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:32:49.471 11:46:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:32:49.471 11:46:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:32:49.471 11:46:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:32:49.471 11:46:55 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:32:49.471 11:46:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:32:49.471 11:46:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:32:49.471 11:46:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:32:49.471 11:46:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:32:49.471 11:46:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:32:49.471 11:46:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:32:49.471 11:46:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:32:49.471 11:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:32:49.471 11:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:32:49.471 11:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:49.471 11:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83706 00:32:49.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:49.471 11:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:32:49.471 11:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83706 00:32:49.471 11:46:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:32:49.471 11:46:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83706 ']' 00:32:49.471 11:46:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:49.471 11:46:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:49.471 11:46:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:49.471 11:46:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:49.471 11:46:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:49.730 [2024-11-20 11:46:55.282881] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
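The waitforlisten call above blocks until the freshly started spdk_tgt (pid 83706) is both still alive and answering on its UNIX-domain RPC socket. A minimal sketch of that start-and-poll pattern, assuming the repository paths seen in this log; it illustrates the idea rather than reproducing the harness's actual waitforlisten implementation:

    # Launch the target pinned to core 0, then poll until its RPC socket answers.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --cpumask='[0]' &
    spdk_tgt_pid=$!
    for ((i = 0; i < 100; i++)); do
        # Give up early if the target died during startup.
        kill -0 "$spdk_tgt_pid" 2>/dev/null || exit 1
        # Ready once a trivial RPC succeeds over the default /var/tmp/spdk.sock.
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version &>/dev/null && break
        sleep 0.1
    done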
00:32:49.730 [2024-11-20 11:46:55.283627] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83706 ] 00:32:49.730 [2024-11-20 11:46:55.484334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:49.990 [2024-11-20 11:46:55.651051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:50.927 11:46:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:50.927 11:46:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:32:50.927 11:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:50.927 11:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:32:50.927 11:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:32:50.927 11:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:50.927 11:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:32:50.928 11:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:50.928 11:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:32:50.928 11:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:50.928 11:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:32:50.928 11:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:50.928 11:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:32:50.928 11:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:50.928 11:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:32:50.928 11:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:50.928 11:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:32:50.928 11:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:32:50.928 11:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:32:50.928 11:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:32:50.928 11:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:32:50.928 11:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:32:50.928 11:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:32:51.187 11:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:32:51.187 11:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:32:51.187 11:46:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:32:51.187 11:46:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:32:51.187 11:46:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:51.187 11:46:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:32:51.187 11:46:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:32:51.187 11:46:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:32:51.447 11:46:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:51.447 { 00:32:51.447 "name": "basen1", 00:32:51.447 "aliases": [ 00:32:51.447 "11563c9e-37db-4b2b-9620-19a4fd9eb07b" 00:32:51.447 ], 00:32:51.447 "product_name": "NVMe disk", 00:32:51.447 "block_size": 4096, 00:32:51.447 "num_blocks": 1310720, 00:32:51.447 "uuid": "11563c9e-37db-4b2b-9620-19a4fd9eb07b", 00:32:51.447 "numa_id": -1, 00:32:51.447 "assigned_rate_limits": { 00:32:51.447 "rw_ios_per_sec": 0, 00:32:51.447 "rw_mbytes_per_sec": 0, 00:32:51.447 "r_mbytes_per_sec": 0, 00:32:51.447 "w_mbytes_per_sec": 0 00:32:51.447 }, 00:32:51.447 "claimed": true, 00:32:51.447 "claim_type": "read_many_write_one", 00:32:51.447 "zoned": false, 00:32:51.447 "supported_io_types": { 00:32:51.447 "read": true, 00:32:51.447 "write": true, 00:32:51.447 "unmap": true, 00:32:51.447 "flush": true, 00:32:51.447 "reset": true, 00:32:51.447 "nvme_admin": true, 00:32:51.447 "nvme_io": true, 00:32:51.447 "nvme_io_md": false, 00:32:51.447 "write_zeroes": true, 00:32:51.447 "zcopy": false, 00:32:51.447 "get_zone_info": false, 00:32:51.447 "zone_management": false, 00:32:51.447 "zone_append": false, 00:32:51.447 "compare": true, 00:32:51.447 "compare_and_write": false, 00:32:51.447 "abort": true, 00:32:51.447 "seek_hole": false, 00:32:51.447 "seek_data": false, 00:32:51.447 "copy": true, 00:32:51.447 "nvme_iov_md": false 00:32:51.447 }, 00:32:51.447 "driver_specific": { 00:32:51.447 "nvme": [ 00:32:51.447 { 00:32:51.447 "pci_address": "0000:00:11.0", 00:32:51.447 "trid": { 00:32:51.447 "trtype": "PCIe", 00:32:51.447 "traddr": "0000:00:11.0" 00:32:51.447 }, 00:32:51.447 "ctrlr_data": { 00:32:51.447 "cntlid": 0, 00:32:51.447 "vendor_id": "0x1b36", 00:32:51.447 "model_number": "QEMU NVMe Ctrl", 00:32:51.447 "serial_number": "12341", 00:32:51.447 "firmware_revision": "8.0.0", 00:32:51.447 "subnqn": "nqn.2019-08.org.qemu:12341", 00:32:51.447 "oacs": { 00:32:51.447 "security": 0, 00:32:51.447 "format": 1, 00:32:51.447 "firmware": 0, 00:32:51.447 "ns_manage": 1 00:32:51.447 }, 00:32:51.447 "multi_ctrlr": false, 00:32:51.447 "ana_reporting": false 00:32:51.447 }, 00:32:51.447 "vs": { 00:32:51.447 "nvme_version": "1.4" 00:32:51.447 }, 00:32:51.447 "ns_data": { 00:32:51.447 "id": 1, 00:32:51.447 "can_share": false 00:32:51.447 } 00:32:51.447 } 00:32:51.447 ], 00:32:51.447 "mp_policy": "active_passive" 00:32:51.447 } 00:32:51.447 } 00:32:51.447 ]' 00:32:51.447 11:46:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:51.447 11:46:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:32:51.447 11:46:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:51.707 11:46:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:32:51.707 11:46:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:32:51.707 11:46:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:32:51.707 11:46:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:32:51.707 11:46:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:32:51.707 11:46:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:32:51.707 11:46:57 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:51.707 11:46:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:32:51.707 11:46:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=08ba8b23-f709-40ee-9f29-586ab37c03e3 00:32:51.707 11:46:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:32:51.707 11:46:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 08ba8b23-f709-40ee-9f29-586ab37c03e3 00:32:51.966 11:46:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:32:52.225 11:46:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=4cdf7ccf-ab3c-44c7-9965-b87d81d31ae5 00:32:52.225 11:46:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 4cdf7ccf-ab3c-44c7-9965-b87d81d31ae5 00:32:52.484 11:46:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=d68feabd-4d49-45b6-8dbe-a49819854f9b 00:32:52.484 11:46:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z d68feabd-4d49-45b6-8dbe-a49819854f9b ]] 00:32:52.484 11:46:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 d68feabd-4d49-45b6-8dbe-a49819854f9b 5120 00:32:52.484 11:46:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:32:52.484 11:46:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:32:52.484 11:46:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=d68feabd-4d49-45b6-8dbe-a49819854f9b 00:32:52.484 11:46:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:32:52.484 11:46:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size d68feabd-4d49-45b6-8dbe-a49819854f9b 00:32:52.484 11:46:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=d68feabd-4d49-45b6-8dbe-a49819854f9b 00:32:52.484 11:46:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:52.484 11:46:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:32:52.484 11:46:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:32:52.484 11:46:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d68feabd-4d49-45b6-8dbe-a49819854f9b 00:32:52.745 11:46:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:52.745 { 00:32:52.745 "name": "d68feabd-4d49-45b6-8dbe-a49819854f9b", 00:32:52.745 "aliases": [ 00:32:52.745 "lvs/basen1p0" 00:32:52.745 ], 00:32:52.745 "product_name": "Logical Volume", 00:32:52.745 "block_size": 4096, 00:32:52.745 "num_blocks": 5242880, 00:32:52.745 "uuid": "d68feabd-4d49-45b6-8dbe-a49819854f9b", 00:32:52.745 "assigned_rate_limits": { 00:32:52.745 "rw_ios_per_sec": 0, 00:32:52.745 "rw_mbytes_per_sec": 0, 00:32:52.745 "r_mbytes_per_sec": 0, 00:32:52.745 "w_mbytes_per_sec": 0 00:32:52.745 }, 00:32:52.745 "claimed": false, 00:32:52.745 "zoned": false, 00:32:52.745 "supported_io_types": { 00:32:52.745 "read": true, 00:32:52.745 "write": true, 00:32:52.745 "unmap": true, 00:32:52.745 "flush": false, 00:32:52.745 "reset": true, 00:32:52.745 "nvme_admin": false, 00:32:52.745 "nvme_io": false, 00:32:52.745 "nvme_io_md": false, 00:32:52.745 "write_zeroes": 
true, 00:32:52.745 "zcopy": false, 00:32:52.745 "get_zone_info": false, 00:32:52.745 "zone_management": false, 00:32:52.745 "zone_append": false, 00:32:52.745 "compare": false, 00:32:52.745 "compare_and_write": false, 00:32:52.745 "abort": false, 00:32:52.745 "seek_hole": true, 00:32:52.745 "seek_data": true, 00:32:52.745 "copy": false, 00:32:52.745 "nvme_iov_md": false 00:32:52.745 }, 00:32:52.745 "driver_specific": { 00:32:52.745 "lvol": { 00:32:52.745 "lvol_store_uuid": "4cdf7ccf-ab3c-44c7-9965-b87d81d31ae5", 00:32:52.745 "base_bdev": "basen1", 00:32:52.745 "thin_provision": true, 00:32:52.745 "num_allocated_clusters": 0, 00:32:52.745 "snapshot": false, 00:32:52.745 "clone": false, 00:32:52.745 "esnap_clone": false 00:32:52.745 } 00:32:52.745 } 00:32:52.745 } 00:32:52.745 ]' 00:32:52.745 11:46:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:52.745 11:46:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:32:52.745 11:46:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:53.005 11:46:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:32:53.005 11:46:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:32:53.005 11:46:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:32:53.005 11:46:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:32:53.005 11:46:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:32:53.005 11:46:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:32:53.264 11:46:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:32:53.264 11:46:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:32:53.264 11:46:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:32:53.530 11:46:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:32:53.530 11:46:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:32:53.530 11:46:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d d68feabd-4d49-45b6-8dbe-a49819854f9b -c cachen1p0 --l2p_dram_limit 2 00:32:53.530 [2024-11-20 11:46:59.221198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:53.530 [2024-11-20 11:46:59.221248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:32:53.530 [2024-11-20 11:46:59.221285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:32:53.530 [2024-11-20 11:46:59.221297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.530 [2024-11-20 11:46:59.221366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:53.531 [2024-11-20 11:46:59.221378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:32:53.531 [2024-11-20 11:46:59.221403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:32:53.531 [2024-11-20 11:46:59.221414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.531 [2024-11-20 11:46:59.221438] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:32:53.531 [2024-11-20 
11:46:59.222502] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:32:53.531 [2024-11-20 11:46:59.222531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:53.531 [2024-11-20 11:46:59.222542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:53.531 [2024-11-20 11:46:59.222555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.095 ms 00:32:53.531 [2024-11-20 11:46:59.222566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.531 [2024-11-20 11:46:59.222650] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID d65cc6e7-c7de-4cca-b53b-ed23fb5acccf 00:32:53.531 [2024-11-20 11:46:59.224078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:53.531 [2024-11-20 11:46:59.224111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:32:53.531 [2024-11-20 11:46:59.224124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:32:53.531 [2024-11-20 11:46:59.224137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.531 [2024-11-20 11:46:59.231458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:53.531 [2024-11-20 11:46:59.231495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:53.531 [2024-11-20 11:46:59.231509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.274 ms 00:32:53.531 [2024-11-20 11:46:59.231522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.531 [2024-11-20 11:46:59.231566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:53.531 [2024-11-20 11:46:59.231582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:53.531 [2024-11-20 11:46:59.231593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:32:53.531 [2024-11-20 11:46:59.231608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.531 [2024-11-20 11:46:59.231676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:53.531 [2024-11-20 11:46:59.231691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:32:53.531 [2024-11-20 11:46:59.231701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:32:53.531 [2024-11-20 11:46:59.231718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.531 [2024-11-20 11:46:59.231743] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:32:53.531 [2024-11-20 11:46:59.237246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:53.531 [2024-11-20 11:46:59.237276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:53.531 [2024-11-20 11:46:59.237306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.506 ms 00:32:53.531 [2024-11-20 11:46:59.237317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.531 [2024-11-20 11:46:59.237350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:53.531 [2024-11-20 11:46:59.237361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:32:53.531 [2024-11-20 11:46:59.237373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:53.531 [2024-11-20 11:46:59.237383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:32:53.531 [2024-11-20 11:46:59.237422] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:32:53.531 [2024-11-20 11:46:59.237564] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:32:53.531 [2024-11-20 11:46:59.237585] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:32:53.531 [2024-11-20 11:46:59.237599] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:32:53.531 [2024-11-20 11:46:59.237615] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:32:53.531 [2024-11-20 11:46:59.237627] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:32:53.531 [2024-11-20 11:46:59.237641] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:32:53.531 [2024-11-20 11:46:59.237652] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:32:53.531 [2024-11-20 11:46:59.237667] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:32:53.531 [2024-11-20 11:46:59.237676] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:32:53.531 [2024-11-20 11:46:59.237689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:53.531 [2024-11-20 11:46:59.237699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:32:53.531 [2024-11-20 11:46:59.237713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.268 ms 00:32:53.531 [2024-11-20 11:46:59.237724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.531 [2024-11-20 11:46:59.237800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:53.531 [2024-11-20 11:46:59.237811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:32:53.531 [2024-11-20 11:46:59.237824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:32:53.531 [2024-11-20 11:46:59.237844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.531 [2024-11-20 11:46:59.237950] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:32:53.531 [2024-11-20 11:46:59.237962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:32:53.531 [2024-11-20 11:46:59.237975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:53.531 [2024-11-20 11:46:59.237986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:53.531 [2024-11-20 11:46:59.237999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:32:53.531 [2024-11-20 11:46:59.238008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:32:53.531 [2024-11-20 11:46:59.238020] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:32:53.531 [2024-11-20 11:46:59.238032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:32:53.531 [2024-11-20 11:46:59.238044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:32:53.531 [2024-11-20 11:46:59.238054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:53.531 [2024-11-20 11:46:59.238065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:32:53.531 [2024-11-20 11:46:59.238075] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:32:53.531 [2024-11-20 11:46:59.238086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:53.531 [2024-11-20 11:46:59.238097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:32:53.531 [2024-11-20 11:46:59.238110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:32:53.531 [2024-11-20 11:46:59.238119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:53.531 [2024-11-20 11:46:59.238134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:32:53.531 [2024-11-20 11:46:59.238143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:32:53.531 [2024-11-20 11:46:59.238155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:53.531 [2024-11-20 11:46:59.238165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:32:53.531 [2024-11-20 11:46:59.238177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:32:53.531 [2024-11-20 11:46:59.238186] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:53.531 [2024-11-20 11:46:59.238198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:32:53.531 [2024-11-20 11:46:59.238207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:32:53.531 [2024-11-20 11:46:59.238219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:53.531 [2024-11-20 11:46:59.238229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:32:53.531 [2024-11-20 11:46:59.238240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:32:53.531 [2024-11-20 11:46:59.238249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:53.531 [2024-11-20 11:46:59.238260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:32:53.531 [2024-11-20 11:46:59.238270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:32:53.531 [2024-11-20 11:46:59.238282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:53.531 [2024-11-20 11:46:59.238291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:32:53.531 [2024-11-20 11:46:59.238305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:32:53.531 [2024-11-20 11:46:59.238315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:53.531 [2024-11-20 11:46:59.238326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:32:53.531 [2024-11-20 11:46:59.238336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:32:53.531 [2024-11-20 11:46:59.238347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:53.531 [2024-11-20 11:46:59.238356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:32:53.531 [2024-11-20 11:46:59.238368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:32:53.531 [2024-11-20 11:46:59.238378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:53.531 [2024-11-20 11:46:59.238391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:32:53.531 [2024-11-20 11:46:59.238400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:32:53.531 [2024-11-20 11:46:59.238411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:53.531 [2024-11-20 11:46:59.238420] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:32:53.531 [2024-11-20 11:46:59.238433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:32:53.531 [2024-11-20 11:46:59.238443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:53.532 [2024-11-20 11:46:59.238455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:53.532 [2024-11-20 11:46:59.238465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:32:53.532 [2024-11-20 11:46:59.238492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:32:53.532 [2024-11-20 11:46:59.238501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:32:53.532 [2024-11-20 11:46:59.238513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:32:53.532 [2024-11-20 11:46:59.238522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:32:53.532 [2024-11-20 11:46:59.238534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:32:53.532 [2024-11-20 11:46:59.238548] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:32:53.532 [2024-11-20 11:46:59.238563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:53.532 [2024-11-20 11:46:59.238577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:32:53.532 [2024-11-20 11:46:59.238590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:32:53.532 [2024-11-20 11:46:59.238601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:32:53.532 [2024-11-20 11:46:59.238613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:32:53.532 [2024-11-20 11:46:59.238624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:32:53.532 [2024-11-20 11:46:59.238638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:32:53.532 [2024-11-20 11:46:59.238649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:32:53.532 [2024-11-20 11:46:59.238662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:32:53.532 [2024-11-20 11:46:59.238672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:32:53.532 [2024-11-20 11:46:59.238689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:32:53.532 [2024-11-20 11:46:59.238699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:32:53.532 [2024-11-20 11:46:59.238712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:32:53.532 [2024-11-20 11:46:59.238722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:32:53.532 [2024-11-20 11:46:59.238735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:32:53.532 [2024-11-20 11:46:59.238745] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:32:53.532 [2024-11-20 11:46:59.238759] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:53.532 [2024-11-20 11:46:59.238770] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:53.532 [2024-11-20 11:46:59.238783] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:32:53.532 [2024-11-20 11:46:59.238793] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:32:53.532 [2024-11-20 11:46:59.238806] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:32:53.532 [2024-11-20 11:46:59.238816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:53.532 [2024-11-20 11:46:59.238829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:32:53.532 [2024-11-20 11:46:59.238839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.926 ms 00:32:53.532 [2024-11-20 11:46:59.238852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:53.532 [2024-11-20 11:46:59.238894] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
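The layout dump above can be sanity-checked from the numbers it prints: the 20480.00 MiB base device holds 5,242,880 pages of 4 KiB, of which 3,774,873 are exposed as L2P entries (the remainder presumably goes to FTL metadata and over-provisioning), and at 4 bytes per entry the address table comes to roughly 14.4 MiB, in line with the 14.50 MiB l2p region reserved above. The "NV cache chunk count 5" likewise matches the "Scrubbing 5 chunks" notice that follows. A quick check using only values printed in the dump:

    # Sanity-check the layout numbers above (4096 B blocks, values from the dump).
    echo $(( 20480 * 1024 * 1024 / 4096 ))                       # 5242880 pages on the base device
    awk 'BEGIN { printf "%.2f MiB\n", 3774873 * 4 / 1048576 }'   # ~14.40 MiB of L2P table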
00:32:53.532 [2024-11-20 11:46:59.238912] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:32:58.803 [2024-11-20 11:47:03.650351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:58.803 [2024-11-20 11:47:03.650417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:32:58.803 [2024-11-20 11:47:03.650434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4411.432 ms 00:32:58.803 [2024-11-20 11:47:03.650448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:58.803 [2024-11-20 11:47:03.688133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:58.803 [2024-11-20 11:47:03.688178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:58.803 [2024-11-20 11:47:03.688194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.359 ms 00:32:58.803 [2024-11-20 11:47:03.688207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:58.803 [2024-11-20 11:47:03.688311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:58.803 [2024-11-20 11:47:03.688327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:32:58.803 [2024-11-20 11:47:03.688339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:32:58.803 [2024-11-20 11:47:03.688354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:58.803 [2024-11-20 11:47:03.732425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:58.803 [2024-11-20 11:47:03.732467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:58.803 [2024-11-20 11:47:03.732489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 44.009 ms 00:32:58.803 [2024-11-20 11:47:03.732501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:58.803 [2024-11-20 11:47:03.732541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:58.803 [2024-11-20 11:47:03.732558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:58.803 [2024-11-20 11:47:03.732569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:58.803 [2024-11-20 11:47:03.732580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:58.803 [2024-11-20 11:47:03.733066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:58.803 [2024-11-20 11:47:03.733084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:58.803 [2024-11-20 11:47:03.733111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.426 ms 00:32:58.803 [2024-11-20 11:47:03.733124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:58.803 [2024-11-20 11:47:03.733174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:58.803 [2024-11-20 11:47:03.733188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:58.803 [2024-11-20 11:47:03.733202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:32:58.803 [2024-11-20 11:47:03.733218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:58.803 [2024-11-20 11:47:03.752582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:58.803 [2024-11-20 11:47:03.752622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:58.803 [2024-11-20 11:47:03.752635] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.344 ms 00:32:58.803 [2024-11-20 11:47:03.752647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:58.803 [2024-11-20 11:47:03.764742] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:32:58.803 [2024-11-20 11:47:03.765809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:58.803 [2024-11-20 11:47:03.765832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:32:58.803 [2024-11-20 11:47:03.765848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.061 ms 00:32:58.803 [2024-11-20 11:47:03.765858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:58.803 [2024-11-20 11:47:03.816353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:58.803 [2024-11-20 11:47:03.816391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:32:58.803 [2024-11-20 11:47:03.816408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 50.461 ms 00:32:58.803 [2024-11-20 11:47:03.816418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:58.803 [2024-11-20 11:47:03.816513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:58.803 [2024-11-20 11:47:03.816544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:32:58.803 [2024-11-20 11:47:03.816562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:32:58.803 [2024-11-20 11:47:03.816572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:58.803 [2024-11-20 11:47:03.851858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:58.803 [2024-11-20 11:47:03.851887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:32:58.803 [2024-11-20 11:47:03.851903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.231 ms 00:32:58.803 [2024-11-20 11:47:03.851913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:58.803 [2024-11-20 11:47:03.886981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:58.803 [2024-11-20 11:47:03.887010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:32:58.804 [2024-11-20 11:47:03.887025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.022 ms 00:32:58.804 [2024-11-20 11:47:03.887035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:58.804 [2024-11-20 11:47:03.887792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:58.804 [2024-11-20 11:47:03.887812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:32:58.804 [2024-11-20 11:47:03.887827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.718 ms 00:32:58.804 [2024-11-20 11:47:03.887836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:58.804 [2024-11-20 11:47:04.014372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:58.804 [2024-11-20 11:47:04.014410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:32:58.804 [2024-11-20 11:47:04.014448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 126.476 ms 00:32:58.804 [2024-11-20 11:47:04.014459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:58.804 [2024-11-20 11:47:04.051521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
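Every management step in this startup is emitted as a trace_step group of four notices (Action, name, duration, status). Assuming the raw console log is saved one entry per line as steps.log, a small GNU grep/paste pipeline (purely illustrative, not part of the test) pairs each name with its duration and ranks the steps, which immediately shows the NV cache scrub (4411.432 ms) dwarfing everything else in this startup:

paste -d'|' <(grep -oP 'name: \K.*' steps.log) \
            <(grep -oP 'duration: \K[0-9.]+' steps.log) |
  sort -t'|' -k2 -g | tail -n3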
00:32:58.804 [2024-11-20 11:47:04.051559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:32:58.804 [2024-11-20 11:47:04.051603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.985 ms 00:32:58.804 [2024-11-20 11:47:04.051614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:58.804 [2024-11-20 11:47:04.088344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:58.804 [2024-11-20 11:47:04.088377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:32:58.804 [2024-11-20 11:47:04.088409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.696 ms 00:32:58.804 [2024-11-20 11:47:04.088419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:58.804 [2024-11-20 11:47:04.124068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:58.804 [2024-11-20 11:47:04.124100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:32:58.804 [2024-11-20 11:47:04.124115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.621 ms 00:32:58.804 [2024-11-20 11:47:04.124125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:58.804 [2024-11-20 11:47:04.124155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:58.804 [2024-11-20 11:47:04.124166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:32:58.804 [2024-11-20 11:47:04.124182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:32:58.804 [2024-11-20 11:47:04.124191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:58.804 [2024-11-20 11:47:04.124283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:58.804 [2024-11-20 11:47:04.124294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:32:58.804 [2024-11-20 11:47:04.124309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:32:58.804 [2024-11-20 11:47:04.124318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:58.804 [2024-11-20 11:47:04.125489] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4903.701 ms, result 0 00:32:58.804 { 00:32:58.804 "name": "ftl", 00:32:58.804 "uuid": "d65cc6e7-c7de-4cca-b53b-ed23fb5acccf" 00:32:58.804 } 00:32:58.804 11:47:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:32:58.804 [2024-11-20 11:47:04.424671] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:58.804 11:47:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:32:59.063 11:47:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:32:59.322 [2024-11-20 11:47:05.017182] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:32:59.322 11:47:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:32:59.581 [2024-11-20 11:47:05.271056] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:59.581 11:47:05 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:32:59.841 11:47:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:32:59.841 11:47:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:32:59.841 11:47:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:32:59.841 11:47:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:32:59.841 11:47:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:32:59.841 11:47:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:32:59.841 11:47:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:32:59.841 11:47:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:32:59.841 11:47:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:32:59.841 11:47:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:32:59.841 Fill FTL, iteration 1 00:32:59.841 11:47:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:32:59.841 11:47:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:32:59.841 11:47:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:59.841 11:47:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:59.841 11:47:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:59.841 11:47:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:32:59.841 11:47:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:32:59.841 11:47:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=83845 00:32:59.841 11:47:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:32:59.841 11:47:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 83845 /var/tmp/spdk.tgt.sock 00:32:59.841 11:47:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83845 ']' 00:33:00.099 11:47:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:33:00.099 11:47:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:00.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:33:00.099 11:47:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:33:00.099 11:47:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:00.099 11:47:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:33:00.099 [2024-11-20 11:47:05.692127] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
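The xtrace above pins down the shape of the whole data phase: bs=1048576 x count=1024 is exactly 1 GiB per pass, and iterations=2 means two fill/checksum rounds whose --seek/--skip windows advance by 1024 MiB each time. Condensed, the loop being traced looks roughly like this (variable names as echoed by upgrade_shutdown.sh; tcp_dd is the common.sh wrapper around spdk_dd, and $testdir stands in for /home/vagrant/spdk_repo/spdk/test/ftl):

bs=1048576; count=1024; qd=2; iterations=2
seek=0; skip=0; sums=()
for ((i = 0; i < iterations; i++)); do
  tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
  seek=$((seek + count))
  tcp_dd --ib=ftln1 --of="$testdir/file" --bs=$bs --count=$count --qd=$qd --skip=$skip
  skip=$((skip + count))
  sums[i]=$(md5sum "$testdir/file" | cut -f1 -d' ')
done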
00:33:00.099 [2024-11-20 11:47:05.692246] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83845 ] 00:33:00.358 [2024-11-20 11:47:05.875145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:00.358 [2024-11-20 11:47:06.044492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:01.293 11:47:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:01.293 11:47:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:33:01.293 11:47:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:33:01.551 ftln1 00:33:01.552 11:47:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:33:01.552 11:47:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:33:01.810 11:47:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:33:01.810 11:47:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 83845 00:33:01.810 11:47:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83845 ']' 00:33:01.810 11:47:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83845 00:33:01.810 11:47:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:33:01.810 11:47:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:01.810 11:47:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83845 00:33:01.810 11:47:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:01.810 11:47:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:01.810 killing process with pid 83845 00:33:01.810 11:47:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83845' 00:33:01.810 11:47:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83845 00:33:01.810 11:47:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83845 00:33:04.396 11:47:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:33:04.396 11:47:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:33:04.396 [2024-11-20 11:47:10.011778] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
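Worth spelling out how ftln1 exists on the initiator side (common.sh@162-176 above): a throwaway spdk_tgt on core 1 attaches to the NVMe/TCP subsystem exported by the main target, its bdev subsystem config is dumped and wrapped into a complete JSON config, and the process is killed again; every spdk_dd run then replays that JSON to re-create the same ftln1 bdev without any live RPC setup. A rough sketch ($rpc and $ini_json are shorthand, not names from the script):

"$rpc" -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl \
    -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0
{
  echo '{"subsystems": ['
  "$rpc" -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev
  echo ']}'
} > "$ini_json"
kill "$spdk_ini_pid"   # the saved JSON now stands in for the live initiator target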
00:33:04.396 [2024-11-20 11:47:10.011952] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83911 ] 00:33:04.654 [2024-11-20 11:47:10.205184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:04.654 [2024-11-20 11:47:10.318061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:06.031  [2024-11-20T11:47:13.169Z] Copying: 238/1024 [MB] (238 MBps) [2024-11-20T11:47:14.103Z] Copying: 476/1024 [MB] (238 MBps) [2024-11-20T11:47:15.040Z] Copying: 715/1024 [MB] (239 MBps) [2024-11-20T11:47:15.298Z] Copying: 955/1024 [MB] (240 MBps) [2024-11-20T11:47:16.233Z] Copying: 1024/1024 [MB] (average 237 MBps) 00:33:10.471 00:33:10.729 Calculate MD5 checksum, iteration 1 00:33:10.729 11:47:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:33:10.729 11:47:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:33:10.729 11:47:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:10.729 11:47:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:10.729 11:47:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:10.729 11:47:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:10.729 11:47:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:10.729 11:47:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:10.729 [2024-11-20 11:47:16.352967] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:33:10.729 [2024-11-20 11:47:16.353145] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83975 ] 00:33:10.986 [2024-11-20 11:47:16.536332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:10.986 [2024-11-20 11:47:16.649998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:12.358  [2024-11-20T11:47:19.053Z] Copying: 647/1024 [MB] (647 MBps) [2024-11-20T11:47:19.617Z] Copying: 1024/1024 [MB] (average 638 MBps) 00:33:13.855 00:33:14.113 11:47:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:33:14.113 11:47:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:16.016 11:47:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:33:16.016 Fill FTL, iteration 2 00:33:16.016 11:47:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=cb854ce6b2442b029dc9c58a3bebaf87 00:33:16.016 11:47:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:33:16.016 11:47:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:33:16.016 11:47:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:33:16.016 11:47:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:33:16.016 11:47:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:16.016 11:47:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:16.016 11:47:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:16.016 11:47:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:16.016 11:47:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:33:16.016 [2024-11-20 11:47:21.619772] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
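The progress lines give a feel for the two halves of each round: the fill lands at roughly 237 MBps while the checksum read-back streams at about 638 MBps, i.e. each 1 GiB pass costs around 4.3 s to write and 1.6 s to read back:

echo "scale=1; 1024 / 237" | bc   # ~4.3 s per 1 GiB fill
echo "scale=1; 1024 / 638" | bc   # ~1.6 s per 1 GiB read-back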
00:33:16.016 [2024-11-20 11:47:21.620206] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84035 ] 00:33:16.275 [2024-11-20 11:47:21.816006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:16.275 [2024-11-20 11:47:21.972496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:18.180  [2024-11-20T11:47:24.511Z] Copying: 238/1024 [MB] (238 MBps) [2024-11-20T11:47:25.446Z] Copying: 470/1024 [MB] (232 MBps) [2024-11-20T11:47:26.822Z] Copying: 705/1024 [MB] (235 MBps) [2024-11-20T11:47:26.822Z] Copying: 943/1024 [MB] (238 MBps) [2024-11-20T11:47:28.197Z] Copying: 1024/1024 [MB] (average 235 MBps) 00:33:22.435 00:33:22.435 Calculate MD5 checksum, iteration 2 00:33:22.435 11:47:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:33:22.435 11:47:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:33:22.435 11:47:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:22.435 11:47:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:22.435 11:47:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:22.435 11:47:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:22.435 11:47:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:22.435 11:47:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:22.435 [2024-11-20 11:47:28.017376] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:33:22.435 [2024-11-20 11:47:28.017506] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84098 ] 00:33:22.435 [2024-11-20 11:47:28.188308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:22.694 [2024-11-20 11:47:28.300036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:24.617  [2024-11-20T11:47:30.674Z] Copying: 665/1024 [MB] (665 MBps) [2024-11-20T11:47:32.046Z] Copying: 1024/1024 [MB] (average 625 MBps) 00:33:26.284 00:33:26.284 11:47:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:33:26.284 11:47:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:28.186 11:47:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:33:28.186 11:47:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=d01297a2c0237a5efe56cf76adadca85 00:33:28.186 11:47:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:33:28.186 11:47:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:33:28.186 11:47:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:33:28.186 [2024-11-20 11:47:33.815858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.186 [2024-11-20 11:47:33.815908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:33:28.186 [2024-11-20 11:47:33.815942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:33:28.186 [2024-11-20 11:47:33.815953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.186 [2024-11-20 11:47:33.815986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.186 [2024-11-20 11:47:33.815998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:33:28.186 [2024-11-20 11:47:33.816009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:33:28.186 [2024-11-20 11:47:33.816023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.186 [2024-11-20 11:47:33.816044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.186 [2024-11-20 11:47:33.816055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:33:28.186 [2024-11-20 11:47:33.816065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:33:28.186 [2024-11-20 11:47:33.816075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.186 [2024-11-20 11:47:33.816136] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.270 ms, result 0 00:33:28.186 true 00:33:28.186 11:47:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:28.444 { 00:33:28.445 "name": "ftl", 00:33:28.445 "properties": [ 00:33:28.445 { 00:33:28.445 "name": "superblock_version", 00:33:28.445 "value": 5, 00:33:28.445 "read-only": true 00:33:28.445 }, 00:33:28.445 { 00:33:28.445 "name": "base_device", 00:33:28.445 "bands": [ 00:33:28.445 { 00:33:28.445 "id": 0, 00:33:28.445 "state": "FREE", 00:33:28.445 "validity": 0.0 
00:33:28.445 }, 00:33:28.445 { 00:33:28.445 "id": 1, 00:33:28.445 "state": "FREE", 00:33:28.445 "validity": 0.0 00:33:28.445 }, 00:33:28.445 { 00:33:28.445 "id": 2, 00:33:28.445 "state": "FREE", 00:33:28.445 "validity": 0.0 00:33:28.445 }, 00:33:28.445 { 00:33:28.445 "id": 3, 00:33:28.445 "state": "FREE", 00:33:28.445 "validity": 0.0 00:33:28.445 }, 00:33:28.445 { 00:33:28.445 "id": 4, 00:33:28.445 "state": "FREE", 00:33:28.445 "validity": 0.0 00:33:28.445 }, 00:33:28.445 { 00:33:28.445 "id": 5, 00:33:28.445 "state": "FREE", 00:33:28.445 "validity": 0.0 00:33:28.445 }, 00:33:28.445 { 00:33:28.445 "id": 6, 00:33:28.445 "state": "FREE", 00:33:28.445 "validity": 0.0 00:33:28.445 }, 00:33:28.445 { 00:33:28.445 "id": 7, 00:33:28.445 "state": "FREE", 00:33:28.445 "validity": 0.0 00:33:28.445 }, 00:33:28.445 { 00:33:28.445 "id": 8, 00:33:28.445 "state": "FREE", 00:33:28.445 "validity": 0.0 00:33:28.445 }, 00:33:28.445 { 00:33:28.445 "id": 9, 00:33:28.445 "state": "FREE", 00:33:28.445 "validity": 0.0 00:33:28.445 }, 00:33:28.445 { 00:33:28.445 "id": 10, 00:33:28.445 "state": "FREE", 00:33:28.445 "validity": 0.0 00:33:28.445 }, 00:33:28.445 { 00:33:28.445 "id": 11, 00:33:28.445 "state": "FREE", 00:33:28.445 "validity": 0.0 00:33:28.445 }, 00:33:28.445 { 00:33:28.445 "id": 12, 00:33:28.445 "state": "FREE", 00:33:28.445 "validity": 0.0 00:33:28.445 }, 00:33:28.445 { 00:33:28.445 "id": 13, 00:33:28.445 "state": "FREE", 00:33:28.445 "validity": 0.0 00:33:28.445 }, 00:33:28.445 { 00:33:28.445 "id": 14, 00:33:28.445 "state": "FREE", 00:33:28.445 "validity": 0.0 00:33:28.445 }, 00:33:28.445 { 00:33:28.445 "id": 15, 00:33:28.445 "state": "FREE", 00:33:28.445 "validity": 0.0 00:33:28.445 }, 00:33:28.445 { 00:33:28.445 "id": 16, 00:33:28.445 "state": "FREE", 00:33:28.445 "validity": 0.0 00:33:28.445 }, 00:33:28.445 { 00:33:28.445 "id": 17, 00:33:28.445 "state": "FREE", 00:33:28.445 "validity": 0.0 00:33:28.445 } 00:33:28.445 ], 00:33:28.445 "read-only": true 00:33:28.445 }, 00:33:28.445 { 00:33:28.445 "name": "cache_device", 00:33:28.445 "type": "bdev", 00:33:28.445 "chunks": [ 00:33:28.445 { 00:33:28.445 "id": 0, 00:33:28.445 "state": "INACTIVE", 00:33:28.445 "utilization": 0.0 00:33:28.445 }, 00:33:28.445 { 00:33:28.445 "id": 1, 00:33:28.445 "state": "CLOSED", 00:33:28.445 "utilization": 1.0 00:33:28.445 }, 00:33:28.445 { 00:33:28.445 "id": 2, 00:33:28.445 "state": "CLOSED", 00:33:28.445 "utilization": 1.0 00:33:28.445 }, 00:33:28.445 { 00:33:28.445 "id": 3, 00:33:28.445 "state": "OPEN", 00:33:28.445 "utilization": 0.001953125 00:33:28.445 }, 00:33:28.445 { 00:33:28.445 "id": 4, 00:33:28.445 "state": "OPEN", 00:33:28.445 "utilization": 0.0 00:33:28.445 } 00:33:28.445 ], 00:33:28.445 "read-only": true 00:33:28.445 }, 00:33:28.445 { 00:33:28.445 "name": "verbose_mode", 00:33:28.445 "value": true, 00:33:28.445 "unit": "", 00:33:28.445 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:33:28.445 }, 00:33:28.445 { 00:33:28.445 "name": "prep_upgrade_on_shutdown", 00:33:28.445 "value": false, 00:33:28.445 "unit": "", 00:33:28.445 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:33:28.445 } 00:33:28.445 ] 00:33:28.445 } 00:33:28.445 11:47:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:33:28.445 [2024-11-20 11:47:34.204200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
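The property dump above also explains the used=3 value a few lines below: chunk 0 of the cache device is INACTIVE, chunks 1 and 2 are CLOSED at utilization 1.0 (lining up with the two 1 GiB fill passes), and chunk 3 is OPEN with a sliver of data, so exactly three chunks carry non-zero utilization. That is what the jq filter in the trace below computes ($rpc shorthand for scripts/rpc.py):

"$rpc" bdev_ftl_get_properties -b ftl |
  jq '[.properties[] | select(.name == "cache_device")
       | .chunks[] | select(.utilization != 0.0)] | length'   # -> 3 here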
00:33:28.445 [2024-11-20 11:47:34.204405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:33:28.445 [2024-11-20 11:47:34.204522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:33:28.445 [2024-11-20 11:47:34.204562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.445 [2024-11-20 11:47:34.204628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.445 [2024-11-20 11:47:34.204663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:33:28.704 [2024-11-20 11:47:34.204754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:28.704 [2024-11-20 11:47:34.204791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.704 [2024-11-20 11:47:34.204840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.704 [2024-11-20 11:47:34.204873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:33:28.704 [2024-11-20 11:47:34.204948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:33:28.704 [2024-11-20 11:47:34.204983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.704 [2024-11-20 11:47:34.205142] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.923 ms, result 0 00:33:28.704 true 00:33:28.704 11:47:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:33:28.704 11:47:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:28.704 11:47:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:33:28.962 11:47:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:33:28.962 11:47:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:33:28.962 11:47:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:33:28.962 [2024-11-20 11:47:34.680580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.962 [2024-11-20 11:47:34.680787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:33:28.962 [2024-11-20 11:47:34.680877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:33:28.962 [2024-11-20 11:47:34.680914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.962 [2024-11-20 11:47:34.680973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.962 [2024-11-20 11:47:34.681006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:33:28.962 [2024-11-20 11:47:34.681046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:33:28.962 [2024-11-20 11:47:34.681156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:28.962 [2024-11-20 11:47:34.681211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:28.962 [2024-11-20 11:47:34.681244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:33:28.962 [2024-11-20 11:47:34.681276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:33:28.962 [2024-11-20 11:47:34.681306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:33:28.962 [2024-11-20 11:47:34.681443] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.844 ms, result 0 00:33:28.962 true 00:33:28.962 11:47:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:29.221 { 00:33:29.221 "name": "ftl", 00:33:29.221 "properties": [ 00:33:29.221 { 00:33:29.221 "name": "superblock_version", 00:33:29.221 "value": 5, 00:33:29.221 "read-only": true 00:33:29.221 }, 00:33:29.221 { 00:33:29.221 "name": "base_device", 00:33:29.221 "bands": [ 00:33:29.221 { 00:33:29.221 "id": 0, 00:33:29.221 "state": "FREE", 00:33:29.221 "validity": 0.0 00:33:29.221 }, 00:33:29.221 { 00:33:29.221 "id": 1, 00:33:29.221 "state": "FREE", 00:33:29.221 "validity": 0.0 00:33:29.221 }, 00:33:29.221 { 00:33:29.221 "id": 2, 00:33:29.221 "state": "FREE", 00:33:29.221 "validity": 0.0 00:33:29.221 }, 00:33:29.221 { 00:33:29.221 "id": 3, 00:33:29.221 "state": "FREE", 00:33:29.221 "validity": 0.0 00:33:29.221 }, 00:33:29.221 { 00:33:29.221 "id": 4, 00:33:29.221 "state": "FREE", 00:33:29.221 "validity": 0.0 00:33:29.221 }, 00:33:29.221 { 00:33:29.221 "id": 5, 00:33:29.221 "state": "FREE", 00:33:29.221 "validity": 0.0 00:33:29.221 }, 00:33:29.221 { 00:33:29.221 "id": 6, 00:33:29.221 "state": "FREE", 00:33:29.221 "validity": 0.0 00:33:29.221 }, 00:33:29.221 { 00:33:29.221 "id": 7, 00:33:29.221 "state": "FREE", 00:33:29.221 "validity": 0.0 00:33:29.221 }, 00:33:29.221 { 00:33:29.221 "id": 8, 00:33:29.221 "state": "FREE", 00:33:29.221 "validity": 0.0 00:33:29.221 }, 00:33:29.221 { 00:33:29.221 "id": 9, 00:33:29.221 "state": "FREE", 00:33:29.221 "validity": 0.0 00:33:29.221 }, 00:33:29.221 { 00:33:29.221 "id": 10, 00:33:29.221 "state": "FREE", 00:33:29.221 "validity": 0.0 00:33:29.221 }, 00:33:29.221 { 00:33:29.221 "id": 11, 00:33:29.221 "state": "FREE", 00:33:29.221 "validity": 0.0 00:33:29.221 }, 00:33:29.221 { 00:33:29.221 "id": 12, 00:33:29.221 "state": "FREE", 00:33:29.221 "validity": 0.0 00:33:29.221 }, 00:33:29.221 { 00:33:29.221 "id": 13, 00:33:29.221 "state": "FREE", 00:33:29.221 "validity": 0.0 00:33:29.221 }, 00:33:29.221 { 00:33:29.221 "id": 14, 00:33:29.221 "state": "FREE", 00:33:29.221 "validity": 0.0 00:33:29.221 }, 00:33:29.221 { 00:33:29.221 "id": 15, 00:33:29.221 "state": "FREE", 00:33:29.221 "validity": 0.0 00:33:29.221 }, 00:33:29.221 { 00:33:29.221 "id": 16, 00:33:29.221 "state": "FREE", 00:33:29.221 "validity": 0.0 00:33:29.221 }, 00:33:29.221 { 00:33:29.221 "id": 17, 00:33:29.221 "state": "FREE", 00:33:29.221 "validity": 0.0 00:33:29.221 } 00:33:29.221 ], 00:33:29.221 "read-only": true 00:33:29.221 }, 00:33:29.221 { 00:33:29.221 "name": "cache_device", 00:33:29.221 "type": "bdev", 00:33:29.221 "chunks": [ 00:33:29.221 { 00:33:29.221 "id": 0, 00:33:29.221 "state": "INACTIVE", 00:33:29.221 "utilization": 0.0 00:33:29.221 }, 00:33:29.221 { 00:33:29.221 "id": 1, 00:33:29.221 "state": "CLOSED", 00:33:29.221 "utilization": 1.0 00:33:29.221 }, 00:33:29.221 { 00:33:29.221 "id": 2, 00:33:29.221 "state": "CLOSED", 00:33:29.221 "utilization": 1.0 00:33:29.221 }, 00:33:29.221 { 00:33:29.221 "id": 3, 00:33:29.221 "state": "OPEN", 00:33:29.221 "utilization": 0.001953125 00:33:29.221 }, 00:33:29.221 { 00:33:29.221 "id": 4, 00:33:29.221 "state": "OPEN", 00:33:29.221 "utilization": 0.0 00:33:29.221 } 00:33:29.221 ], 00:33:29.221 "read-only": true 00:33:29.221 }, 00:33:29.221 { 00:33:29.221 "name": "verbose_mode", 
00:33:29.221 "value": true, 00:33:29.221 "unit": "", 00:33:29.221 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:33:29.221 }, 00:33:29.221 { 00:33:29.221 "name": "prep_upgrade_on_shutdown", 00:33:29.221 "value": true, 00:33:29.221 "unit": "", 00:33:29.221 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:33:29.221 } 00:33:29.221 ] 00:33:29.221 } 00:33:29.221 11:47:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:33:29.221 11:47:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83706 ]] 00:33:29.221 11:47:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83706 00:33:29.221 11:47:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83706 ']' 00:33:29.221 11:47:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83706 00:33:29.480 11:47:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:33:29.480 11:47:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:29.480 11:47:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83706 00:33:29.480 11:47:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:29.480 killing process with pid 83706 00:33:29.480 11:47:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:29.480 11:47:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83706' 00:33:29.480 11:47:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83706 00:33:29.480 11:47:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83706 00:33:30.414 [2024-11-20 11:47:36.123446] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:33:30.414 [2024-11-20 11:47:36.142928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.414 [2024-11-20 11:47:36.142972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:33:30.414 [2024-11-20 11:47:36.142987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:33:30.414 [2024-11-20 11:47:36.143014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.414 [2024-11-20 11:47:36.143037] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:33:30.414 [2024-11-20 11:47:36.147332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.414 [2024-11-20 11:47:36.147360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:33:30.414 [2024-11-20 11:47:36.147372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.278 ms 00:33:30.414 [2024-11-20 11:47:36.147399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.537 [2024-11-20 11:47:43.392729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:38.537 [2024-11-20 11:47:43.392795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:33:38.537 [2024-11-20 11:47:43.392814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7245.263 ms 00:33:38.537 [2024-11-20 11:47:43.392825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.537 [2024-11-20 11:47:43.394150] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:33:38.537 [2024-11-20 11:47:43.394181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:33:38.537 [2024-11-20 11:47:43.394195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.301 ms 00:33:38.537 [2024-11-20 11:47:43.394205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.537 [2024-11-20 11:47:43.395157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:38.537 [2024-11-20 11:47:43.395178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:33:38.537 [2024-11-20 11:47:43.395191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.921 ms 00:33:38.537 [2024-11-20 11:47:43.395201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.537 [2024-11-20 11:47:43.410482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:38.537 [2024-11-20 11:47:43.410525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:33:38.537 [2024-11-20 11:47:43.410539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.232 ms 00:33:38.537 [2024-11-20 11:47:43.410548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.537 [2024-11-20 11:47:43.420111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:38.537 [2024-11-20 11:47:43.420148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:33:38.537 [2024-11-20 11:47:43.420162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.528 ms 00:33:38.537 [2024-11-20 11:47:43.420172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.537 [2024-11-20 11:47:43.420249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:38.537 [2024-11-20 11:47:43.420261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:33:38.537 [2024-11-20 11:47:43.420272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:33:38.537 [2024-11-20 11:47:43.420287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.537 [2024-11-20 11:47:43.435075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:38.537 [2024-11-20 11:47:43.435106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:33:38.537 [2024-11-20 11:47:43.435119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.771 ms 00:33:38.537 [2024-11-20 11:47:43.435144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.537 [2024-11-20 11:47:43.449806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:38.537 [2024-11-20 11:47:43.449837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:33:38.537 [2024-11-20 11:47:43.449849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.627 ms 00:33:38.537 [2024-11-20 11:47:43.449858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.537 [2024-11-20 11:47:43.464070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:38.537 [2024-11-20 11:47:43.464099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:33:38.537 [2024-11-20 11:47:43.464112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.179 ms 00:33:38.537 [2024-11-20 11:47:43.464120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.537 [2024-11-20 11:47:43.478272] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:38.537 [2024-11-20 11:47:43.478300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:33:38.537 [2024-11-20 11:47:43.478311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.080 ms 00:33:38.537 [2024-11-20 11:47:43.478320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.537 [2024-11-20 11:47:43.478352] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:33:38.537 [2024-11-20 11:47:43.478367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:33:38.537 [2024-11-20 11:47:43.478379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:33:38.537 [2024-11-20 11:47:43.478402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:33:38.537 [2024-11-20 11:47:43.478412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:38.537 [2024-11-20 11:47:43.478423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:38.537 [2024-11-20 11:47:43.478433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:38.537 [2024-11-20 11:47:43.478443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:38.537 [2024-11-20 11:47:43.478453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:38.537 [2024-11-20 11:47:43.478462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:38.537 [2024-11-20 11:47:43.478481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:38.537 [2024-11-20 11:47:43.478491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:38.537 [2024-11-20 11:47:43.478501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:38.537 [2024-11-20 11:47:43.478510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:38.537 [2024-11-20 11:47:43.478520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:38.537 [2024-11-20 11:47:43.478547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:38.537 [2024-11-20 11:47:43.478558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:38.537 [2024-11-20 11:47:43.478568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:38.537 [2024-11-20 11:47:43.478578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:38.537 [2024-11-20 11:47:43.478590] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:33:38.537 [2024-11-20 11:47:43.478600] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: d65cc6e7-c7de-4cca-b53b-ed23fb5acccf 00:33:38.537 [2024-11-20 11:47:43.478610] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:33:38.537 [2024-11-20 11:47:43.478620] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:33:38.537 [2024-11-20 11:47:43.478629] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:33:38.537 [2024-11-20 11:47:43.478639] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:33:38.537 [2024-11-20 11:47:43.478648] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:33:38.537 [2024-11-20 11:47:43.478658] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:33:38.537 [2024-11-20 11:47:43.478671] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:33:38.537 [2024-11-20 11:47:43.478680] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:33:38.538 [2024-11-20 11:47:43.478689] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:33:38.538 [2024-11-20 11:47:43.478699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:38.538 [2024-11-20 11:47:43.478709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:33:38.538 [2024-11-20 11:47:43.478723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.348 ms 00:33:38.538 [2024-11-20 11:47:43.478733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.538 [2024-11-20 11:47:43.499210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:38.538 [2024-11-20 11:47:43.499237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:33:38.538 [2024-11-20 11:47:43.499250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.447 ms 00:33:38.538 [2024-11-20 11:47:43.499261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.538 [2024-11-20 11:47:43.499825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:38.538 [2024-11-20 11:47:43.499836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:33:38.538 [2024-11-20 11:47:43.499846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.536 ms 00:33:38.538 [2024-11-20 11:47:43.499856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.538 [2024-11-20 11:47:43.565979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:38.538 [2024-11-20 11:47:43.566012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:33:38.538 [2024-11-20 11:47:43.566024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:38.538 [2024-11-20 11:47:43.566039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.538 [2024-11-20 11:47:43.566072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:38.538 [2024-11-20 11:47:43.566082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:33:38.538 [2024-11-20 11:47:43.566093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:38.538 [2024-11-20 11:47:43.566102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.538 [2024-11-20 11:47:43.566174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:38.538 [2024-11-20 11:47:43.566187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:33:38.538 [2024-11-20 11:47:43.566214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:38.538 [2024-11-20 11:47:43.566224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.538 [2024-11-20 11:47:43.566247] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:38.538 [2024-11-20 11:47:43.566257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:33:38.538 [2024-11-20 11:47:43.566268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:38.538 [2024-11-20 11:47:43.566278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.538 [2024-11-20 11:47:43.689306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:38.538 [2024-11-20 11:47:43.689362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:33:38.538 [2024-11-20 11:47:43.689378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:38.538 [2024-11-20 11:47:43.689389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.538 [2024-11-20 11:47:43.788662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:38.538 [2024-11-20 11:47:43.788705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:33:38.538 [2024-11-20 11:47:43.788720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:38.538 [2024-11-20 11:47:43.788731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.538 [2024-11-20 11:47:43.788867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:38.538 [2024-11-20 11:47:43.788882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:33:38.538 [2024-11-20 11:47:43.788893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:38.538 [2024-11-20 11:47:43.788904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.538 [2024-11-20 11:47:43.788951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:38.538 [2024-11-20 11:47:43.788967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:33:38.538 [2024-11-20 11:47:43.788978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:38.538 [2024-11-20 11:47:43.788989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.538 [2024-11-20 11:47:43.789117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:38.538 [2024-11-20 11:47:43.789130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:33:38.538 [2024-11-20 11:47:43.789141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:38.538 [2024-11-20 11:47:43.789152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.538 [2024-11-20 11:47:43.789188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:38.538 [2024-11-20 11:47:43.789200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:33:38.538 [2024-11-20 11:47:43.789215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:38.538 [2024-11-20 11:47:43.789227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.538 [2024-11-20 11:47:43.789282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:38.538 [2024-11-20 11:47:43.789299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:33:38.538 [2024-11-20 11:47:43.789316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:38.538 [2024-11-20 11:47:43.789329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.538 
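The shutdown statistics a little above are internally consistent with the I/O the test issued: two 1 GiB fills at the 4 KiB FTL block size come to 524288 user writes, the band validity dump accounts for exactly the same number of valid blocks (261120 + 261120 + 2048), and the reported WAF is simply total writes over user writes:

echo $(( 2 * 1024 * 1024 * 1024 / 4096 ))   # 524288, the user write count
echo $(( 261120 + 261120 + 2048 ))          # 524288 again, from the band dump
echo "scale=4; 786752 / 524288" | bc        # 1.5006, the reported WAF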
[2024-11-20 11:47:43.789372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:38.538 [2024-11-20 11:47:43.789388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:33:38.538 [2024-11-20 11:47:43.789399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:38.538 [2024-11-20 11:47:43.789409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:38.538 [2024-11-20 11:47:43.789556] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7646.562 ms, result 0 00:33:41.829 11:47:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:33:41.829 11:47:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:33:41.829 11:47:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:33:41.829 11:47:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:33:41.829 11:47:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:41.829 11:47:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84298 00:33:41.829 11:47:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:33:41.829 11:47:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:41.829 11:47:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84298 00:33:41.829 11:47:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84298 ']' 00:33:41.829 11:47:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:41.829 11:47:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:41.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:41.829 11:47:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:41.829 11:47:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:41.829 11:47:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:33:41.829 [2024-11-20 11:47:47.122818] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
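With 'FTL shutdown' finished (7646.562 ms, dominated by the 7245 ms core-poller stop), the main target is restarted from a config snapshot rather than by re-running RPCs: the tgt.json passed on the command line above is evidently the file written by the earlier rpc.py save_config call, so the transport, subsystem and FTL bdev all come back on their own. The restart boils down to (sketch; $spdk_dir stands in for /home/vagrant/spdk_repo/spdk):

"$spdk_dir"/build/bin/spdk_tgt '--cpumask=[0]' \
    --config="$spdk_dir"/test/ftl/config/tgt.json &
spdk_tgt_pid=$!
waitforlisten "$spdk_tgt_pid"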
00:33:41.829 [2024-11-20 11:47:47.122955] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84298 ] 00:33:41.829 [2024-11-20 11:47:47.296310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:41.829 [2024-11-20 11:47:47.406397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:42.765 [2024-11-20 11:47:48.373541] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:33:42.765 [2024-11-20 11:47:48.373616] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:33:42.765 [2024-11-20 11:47:48.520816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:42.765 [2024-11-20 11:47:48.520863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:33:42.765 [2024-11-20 11:47:48.520877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:33:42.765 [2024-11-20 11:47:48.520904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:42.765 [2024-11-20 11:47:48.520956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:42.765 [2024-11-20 11:47:48.520969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:33:42.765 [2024-11-20 11:47:48.521004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:33:42.765 [2024-11-20 11:47:48.521015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:42.766 [2024-11-20 11:47:48.521056] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:33:42.766 [2024-11-20 11:47:48.522102] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:33:42.766 [2024-11-20 11:47:48.522137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:42.766 [2024-11-20 11:47:48.522148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:33:42.766 [2024-11-20 11:47:48.522159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.092 ms 00:33:42.766 [2024-11-20 11:47:48.522169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:42.766 [2024-11-20 11:47:48.523732] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:33:43.025 [2024-11-20 11:47:48.543493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:43.025 [2024-11-20 11:47:48.543534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:33:43.025 [2024-11-20 11:47:48.543563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.762 ms 00:33:43.025 [2024-11-20 11:47:48.543575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:43.025 [2024-11-20 11:47:48.543643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:43.025 [2024-11-20 11:47:48.543656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:33:43.025 [2024-11-20 11:47:48.543668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:33:43.025 [2024-11-20 11:47:48.543678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:43.025 [2024-11-20 11:47:48.550830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:43.025 [2024-11-20 
11:47:48.550863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:33:43.025 [2024-11-20 11:47:48.550875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.065 ms 00:33:43.025 [2024-11-20 11:47:48.550900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:43.025 [2024-11-20 11:47:48.550969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:43.025 [2024-11-20 11:47:48.550982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:33:43.025 [2024-11-20 11:47:48.550994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:33:43.025 [2024-11-20 11:47:48.551004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:43.025 [2024-11-20 11:47:48.551051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:43.025 [2024-11-20 11:47:48.551062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:33:43.025 [2024-11-20 11:47:48.551077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:33:43.025 [2024-11-20 11:47:48.551087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:43.025 [2024-11-20 11:47:48.551115] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:33:43.025 [2024-11-20 11:47:48.556063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:43.025 [2024-11-20 11:47:48.556095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:33:43.025 [2024-11-20 11:47:48.556123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.954 ms 00:33:43.025 [2024-11-20 11:47:48.556137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:43.025 [2024-11-20 11:47:48.556165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:43.025 [2024-11-20 11:47:48.556177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:33:43.025 [2024-11-20 11:47:48.556187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:43.025 [2024-11-20 11:47:48.556197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:43.025 [2024-11-20 11:47:48.556255] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:33:43.025 [2024-11-20 11:47:48.556278] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:33:43.025 [2024-11-20 11:47:48.556316] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:33:43.025 [2024-11-20 11:47:48.556333] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:33:43.025 [2024-11-20 11:47:48.556422] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:33:43.025 [2024-11-20 11:47:48.556435] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:33:43.025 [2024-11-20 11:47:48.556448] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:33:43.026 [2024-11-20 11:47:48.556477] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:33:43.026 [2024-11-20 11:47:48.556490] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:33:43.026 [2024-11-20 11:47:48.556520] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:33:43.026 [2024-11-20 11:47:48.556532] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:33:43.026 [2024-11-20 11:47:48.556541] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:33:43.026 [2024-11-20 11:47:48.556551] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:33:43.026 [2024-11-20 11:47:48.556562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:43.026 [2024-11-20 11:47:48.556573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:33:43.026 [2024-11-20 11:47:48.556583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.311 ms 00:33:43.026 [2024-11-20 11:47:48.556593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:43.026 [2024-11-20 11:47:48.556669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:43.026 [2024-11-20 11:47:48.556680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:33:43.026 [2024-11-20 11:47:48.556691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.056 ms 00:33:43.026 [2024-11-20 11:47:48.556705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:43.026 [2024-11-20 11:47:48.556798] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:33:43.026 [2024-11-20 11:47:48.556811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:33:43.026 [2024-11-20 11:47:48.556822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:43.026 [2024-11-20 11:47:48.556832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:43.026 [2024-11-20 11:47:48.556842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:33:43.026 [2024-11-20 11:47:48.556852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:33:43.026 [2024-11-20 11:47:48.556861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:33:43.026 [2024-11-20 11:47:48.556871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:33:43.026 [2024-11-20 11:47:48.556881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:33:43.026 [2024-11-20 11:47:48.556890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:43.026 [2024-11-20 11:47:48.556899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:33:43.026 [2024-11-20 11:47:48.556913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:33:43.026 [2024-11-20 11:47:48.556922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:43.026 [2024-11-20 11:47:48.556931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:33:43.026 [2024-11-20 11:47:48.556940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:33:43.026 [2024-11-20 11:47:48.556950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:43.026 [2024-11-20 11:47:48.556959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:33:43.026 [2024-11-20 11:47:48.556969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:33:43.026 [2024-11-20 11:47:48.556978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:43.026 [2024-11-20 11:47:48.556988] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:33:43.026 [2024-11-20 11:47:48.556997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:33:43.026 [2024-11-20 11:47:48.557006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:43.026 [2024-11-20 11:47:48.557015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:33:43.026 [2024-11-20 11:47:48.557024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:33:43.026 [2024-11-20 11:47:48.557044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:43.026 [2024-11-20 11:47:48.557065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:33:43.026 [2024-11-20 11:47:48.557074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:33:43.026 [2024-11-20 11:47:48.557084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:43.026 [2024-11-20 11:47:48.557093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:33:43.026 [2024-11-20 11:47:48.557102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:33:43.026 [2024-11-20 11:47:48.557111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:43.026 [2024-11-20 11:47:48.557121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:33:43.026 [2024-11-20 11:47:48.557130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:33:43.026 [2024-11-20 11:47:48.557140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:43.026 [2024-11-20 11:47:48.557149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:33:43.026 [2024-11-20 11:47:48.557159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:33:43.026 [2024-11-20 11:47:48.557168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:43.026 [2024-11-20 11:47:48.557177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:33:43.026 [2024-11-20 11:47:48.557187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:33:43.026 [2024-11-20 11:47:48.557196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:43.026 [2024-11-20 11:47:48.557205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:33:43.026 [2024-11-20 11:47:48.557214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:33:43.026 [2024-11-20 11:47:48.557223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:43.026 [2024-11-20 11:47:48.557233] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:33:43.026 [2024-11-20 11:47:48.557244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:33:43.026 [2024-11-20 11:47:48.557253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:43.026 [2024-11-20 11:47:48.557263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:43.026 [2024-11-20 11:47:48.557278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:33:43.026 [2024-11-20 11:47:48.557287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:33:43.026 [2024-11-20 11:47:48.557297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:33:43.026 [2024-11-20 11:47:48.557306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:33:43.026 [2024-11-20 11:47:48.557315] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:33:43.026 [2024-11-20 11:47:48.557325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:33:43.026 [2024-11-20 11:47:48.557336] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:33:43.026 [2024-11-20 11:47:48.557347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:43.026 [2024-11-20 11:47:48.557359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:33:43.026 [2024-11-20 11:47:48.557369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:33:43.026 [2024-11-20 11:47:48.557381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:33:43.026 [2024-11-20 11:47:48.557391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:33:43.026 [2024-11-20 11:47:48.557402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:33:43.026 [2024-11-20 11:47:48.557412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:33:43.026 [2024-11-20 11:47:48.557422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:33:43.026 [2024-11-20 11:47:48.557432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:33:43.026 [2024-11-20 11:47:48.557443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:33:43.026 [2024-11-20 11:47:48.557454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:33:43.026 [2024-11-20 11:47:48.557464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:33:43.026 [2024-11-20 11:47:48.557485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:33:43.026 [2024-11-20 11:47:48.557496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:33:43.026 [2024-11-20 11:47:48.557507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:33:43.026 [2024-11-20 11:47:48.557518] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:33:43.026 [2024-11-20 11:47:48.557529] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:43.026 [2024-11-20 11:47:48.557541] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:43.026 [2024-11-20 11:47:48.557552] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:33:43.026 [2024-11-20 11:47:48.557562] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:33:43.026 [2024-11-20 11:47:48.557573] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:33:43.026 [2024-11-20 11:47:48.557584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:43.026 [2024-11-20 11:47:48.557594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:33:43.026 [2024-11-20 11:47:48.557605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.841 ms 00:33:43.026 [2024-11-20 11:47:48.557615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:43.026 [2024-11-20 11:47:48.557663] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:33:43.026 [2024-11-20 11:47:48.557677] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:33:46.315 [2024-11-20 11:47:51.405912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:46.315 [2024-11-20 11:47:51.405987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:33:46.315 [2024-11-20 11:47:51.406013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2848.230 ms 00:33:46.315 [2024-11-20 11:47:51.406031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:46.315 [2024-11-20 11:47:51.461182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:46.315 [2024-11-20 11:47:51.461247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:33:46.315 [2024-11-20 11:47:51.461271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 54.650 ms 00:33:46.315 [2024-11-20 11:47:51.461288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:46.315 [2024-11-20 11:47:51.461450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:46.315 [2024-11-20 11:47:51.461493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:33:46.315 [2024-11-20 11:47:51.461512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:33:46.315 [2024-11-20 11:47:51.461527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:46.315 [2024-11-20 11:47:51.539094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:46.315 [2024-11-20 11:47:51.539154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:33:46.315 [2024-11-20 11:47:51.539176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 77.467 ms 00:33:46.315 [2024-11-20 11:47:51.539198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:46.315 [2024-11-20 11:47:51.539267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:46.315 [2024-11-20 11:47:51.539285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:33:46.315 [2024-11-20 11:47:51.539302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:46.315 [2024-11-20 11:47:51.539318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:46.315 [2024-11-20 11:47:51.539908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:46.315 [2024-11-20 11:47:51.539940] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:33:46.315 [2024-11-20 11:47:51.539957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.499 ms 00:33:46.315 [2024-11-20 11:47:51.539974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:46.315 [2024-11-20 11:47:51.540045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:46.315 [2024-11-20 11:47:51.540063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:33:46.315 [2024-11-20 11:47:51.540080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:33:46.315 [2024-11-20 11:47:51.540096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:46.315 [2024-11-20 11:47:51.562258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:46.315 [2024-11-20 11:47:51.562301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:33:46.315 [2024-11-20 11:47:51.562315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.126 ms 00:33:46.315 [2024-11-20 11:47:51.562327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:46.315 [2024-11-20 11:47:51.582074] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:33:46.315 [2024-11-20 11:47:51.582116] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:33:46.315 [2024-11-20 11:47:51.582132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:46.315 [2024-11-20 11:47:51.582143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:33:46.315 [2024-11-20 11:47:51.582155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.642 ms 00:33:46.315 [2024-11-20 11:47:51.582165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:46.315 [2024-11-20 11:47:51.602550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:46.315 [2024-11-20 11:47:51.602589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:33:46.315 [2024-11-20 11:47:51.602619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.337 ms 00:33:46.315 [2024-11-20 11:47:51.602630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:46.315 [2024-11-20 11:47:51.620349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:46.315 [2024-11-20 11:47:51.620383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:33:46.315 [2024-11-20 11:47:51.620395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.670 ms 00:33:46.315 [2024-11-20 11:47:51.620404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:46.315 [2024-11-20 11:47:51.638112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:46.315 [2024-11-20 11:47:51.638147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:33:46.315 [2024-11-20 11:47:51.638160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.650 ms 00:33:46.315 [2024-11-20 11:47:51.638169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:46.315 [2024-11-20 11:47:51.639014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:46.315 [2024-11-20 11:47:51.639053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:33:46.315 [2024-11-20 
11:47:51.639066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.720 ms 00:33:46.315 [2024-11-20 11:47:51.639076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:46.315 [2024-11-20 11:47:51.736733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:46.315 [2024-11-20 11:47:51.736804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:33:46.315 [2024-11-20 11:47:51.736821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 97.630 ms 00:33:46.315 [2024-11-20 11:47:51.736833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:46.315 [2024-11-20 11:47:51.748461] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:33:46.315 [2024-11-20 11:47:51.749424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:46.315 [2024-11-20 11:47:51.749453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:33:46.315 [2024-11-20 11:47:51.749467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.531 ms 00:33:46.316 [2024-11-20 11:47:51.749489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:46.316 [2024-11-20 11:47:51.749583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:46.316 [2024-11-20 11:47:51.749601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:33:46.316 [2024-11-20 11:47:51.749612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:33:46.316 [2024-11-20 11:47:51.749623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:46.316 [2024-11-20 11:47:51.749685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:46.316 [2024-11-20 11:47:51.749698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:33:46.316 [2024-11-20 11:47:51.749709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:33:46.316 [2024-11-20 11:47:51.749719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:46.316 [2024-11-20 11:47:51.749742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:46.316 [2024-11-20 11:47:51.749753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:33:46.316 [2024-11-20 11:47:51.749764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:33:46.316 [2024-11-20 11:47:51.749777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:46.316 [2024-11-20 11:47:51.749814] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:33:46.316 [2024-11-20 11:47:51.749826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:46.316 [2024-11-20 11:47:51.749836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:33:46.316 [2024-11-20 11:47:51.749847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:33:46.316 [2024-11-20 11:47:51.749857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:46.316 [2024-11-20 11:47:51.786927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:46.316 [2024-11-20 11:47:51.786973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:33:46.316 [2024-11-20 11:47:51.787003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.048 ms 00:33:46.316 [2024-11-20 11:47:51.787014] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:46.316 [2024-11-20 11:47:51.787094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:46.316 [2024-11-20 11:47:51.787107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:33:46.316 [2024-11-20 11:47:51.787118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:33:46.316 [2024-11-20 11:47:51.787129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:46.316 [2024-11-20 11:47:51.788313] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3267.018 ms, result 0 00:33:46.316 [2024-11-20 11:47:51.803291] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:46.316 [2024-11-20 11:47:51.819294] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:33:46.316 [2024-11-20 11:47:51.828169] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:46.316 11:47:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:46.316 11:47:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:33:46.316 11:47:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:46.316 11:47:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:33:46.316 11:47:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:33:46.575 [2024-11-20 11:47:52.144318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:46.575 [2024-11-20 11:47:52.144365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:33:46.575 [2024-11-20 11:47:52.144381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:33:46.575 [2024-11-20 11:47:52.144395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:46.575 [2024-11-20 11:47:52.144422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:46.575 [2024-11-20 11:47:52.144434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:33:46.575 [2024-11-20 11:47:52.144445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:33:46.575 [2024-11-20 11:47:52.144454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:46.575 [2024-11-20 11:47:52.144487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:46.575 [2024-11-20 11:47:52.144499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:33:46.575 [2024-11-20 11:47:52.144525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:33:46.575 [2024-11-20 11:47:52.144536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:46.575 [2024-11-20 11:47:52.144599] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.271 ms, result 0 00:33:46.575 true 00:33:46.575 11:47:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:46.834 { 00:33:46.834 "name": "ftl", 00:33:46.834 "properties": [ 00:33:46.834 { 00:33:46.834 "name": "superblock_version", 00:33:46.834 "value": 5, 00:33:46.834 "read-only": true 00:33:46.834 }, 
00:33:46.834 { 00:33:46.834 "name": "base_device", 00:33:46.834 "bands": [ 00:33:46.834 { 00:33:46.834 "id": 0, 00:33:46.834 "state": "CLOSED", 00:33:46.834 "validity": 1.0 00:33:46.834 }, 00:33:46.834 { 00:33:46.834 "id": 1, 00:33:46.834 "state": "CLOSED", 00:33:46.834 "validity": 1.0 00:33:46.834 }, 00:33:46.834 { 00:33:46.834 "id": 2, 00:33:46.834 "state": "CLOSED", 00:33:46.834 "validity": 0.007843137254901933 00:33:46.834 }, 00:33:46.834 { 00:33:46.834 "id": 3, 00:33:46.834 "state": "FREE", 00:33:46.834 "validity": 0.0 00:33:46.834 }, 00:33:46.834 { 00:33:46.834 "id": 4, 00:33:46.834 "state": "FREE", 00:33:46.834 "validity": 0.0 00:33:46.834 }, 00:33:46.834 { 00:33:46.834 "id": 5, 00:33:46.834 "state": "FREE", 00:33:46.834 "validity": 0.0 00:33:46.834 }, 00:33:46.834 { 00:33:46.834 "id": 6, 00:33:46.834 "state": "FREE", 00:33:46.834 "validity": 0.0 00:33:46.834 }, 00:33:46.834 { 00:33:46.834 "id": 7, 00:33:46.834 "state": "FREE", 00:33:46.834 "validity": 0.0 00:33:46.834 }, 00:33:46.834 { 00:33:46.834 "id": 8, 00:33:46.834 "state": "FREE", 00:33:46.834 "validity": 0.0 00:33:46.834 }, 00:33:46.834 { 00:33:46.834 "id": 9, 00:33:46.834 "state": "FREE", 00:33:46.834 "validity": 0.0 00:33:46.834 }, 00:33:46.834 { 00:33:46.834 "id": 10, 00:33:46.834 "state": "FREE", 00:33:46.834 "validity": 0.0 00:33:46.834 }, 00:33:46.834 { 00:33:46.834 "id": 11, 00:33:46.834 "state": "FREE", 00:33:46.834 "validity": 0.0 00:33:46.834 }, 00:33:46.834 { 00:33:46.834 "id": 12, 00:33:46.834 "state": "FREE", 00:33:46.834 "validity": 0.0 00:33:46.834 }, 00:33:46.834 { 00:33:46.834 "id": 13, 00:33:46.834 "state": "FREE", 00:33:46.834 "validity": 0.0 00:33:46.834 }, 00:33:46.834 { 00:33:46.834 "id": 14, 00:33:46.834 "state": "FREE", 00:33:46.834 "validity": 0.0 00:33:46.834 }, 00:33:46.834 { 00:33:46.834 "id": 15, 00:33:46.834 "state": "FREE", 00:33:46.834 "validity": 0.0 00:33:46.834 }, 00:33:46.834 { 00:33:46.834 "id": 16, 00:33:46.834 "state": "FREE", 00:33:46.834 "validity": 0.0 00:33:46.834 }, 00:33:46.834 { 00:33:46.834 "id": 17, 00:33:46.834 "state": "FREE", 00:33:46.834 "validity": 0.0 00:33:46.834 } 00:33:46.834 ], 00:33:46.834 "read-only": true 00:33:46.834 }, 00:33:46.834 { 00:33:46.834 "name": "cache_device", 00:33:46.834 "type": "bdev", 00:33:46.834 "chunks": [ 00:33:46.834 { 00:33:46.834 "id": 0, 00:33:46.834 "state": "INACTIVE", 00:33:46.834 "utilization": 0.0 00:33:46.834 }, 00:33:46.834 { 00:33:46.834 "id": 1, 00:33:46.835 "state": "OPEN", 00:33:46.835 "utilization": 0.0 00:33:46.835 }, 00:33:46.835 { 00:33:46.835 "id": 2, 00:33:46.835 "state": "OPEN", 00:33:46.835 "utilization": 0.0 00:33:46.835 }, 00:33:46.835 { 00:33:46.835 "id": 3, 00:33:46.835 "state": "FREE", 00:33:46.835 "utilization": 0.0 00:33:46.835 }, 00:33:46.835 { 00:33:46.835 "id": 4, 00:33:46.835 "state": "FREE", 00:33:46.835 "utilization": 0.0 00:33:46.835 } 00:33:46.835 ], 00:33:46.835 "read-only": true 00:33:46.835 }, 00:33:46.835 { 00:33:46.835 "name": "verbose_mode", 00:33:46.835 "value": true, 00:33:46.835 "unit": "", 00:33:46.835 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:33:46.835 }, 00:33:46.835 { 00:33:46.835 "name": "prep_upgrade_on_shutdown", 00:33:46.835 "value": false, 00:33:46.835 "unit": "", 00:33:46.835 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:33:46.835 } 00:33:46.835 ] 00:33:46.835 } 00:33:46.835 11:47:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == 
"cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:33:46.835 11:47:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:33:46.835 11:47:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:47.094 11:47:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:33:47.094 11:47:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:33:47.094 11:47:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:33:47.094 11:47:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:33:47.094 11:47:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:47.353 11:47:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:33:47.353 11:47:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:33:47.353 11:47:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:33:47.353 11:47:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:33:47.353 11:47:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:33:47.353 11:47:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:47.353 11:47:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:33:47.353 Validate MD5 checksum, iteration 1 00:33:47.353 11:47:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:47.353 11:47:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:47.353 11:47:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:47.353 11:47:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:47.353 11:47:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:47.353 11:47:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:47.353 [2024-11-20 11:47:52.971615] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
00:33:47.353 [2024-11-20 11:47:52.971750] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84392 ] 00:33:47.612 [2024-11-20 11:47:53.152185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:47.612 [2024-11-20 11:47:53.312426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:49.515  [2024-11-20T11:47:55.843Z] Copying: 650/1024 [MB] (650 MBps) [2024-11-20T11:47:57.221Z] Copying: 1024/1024 [MB] (average 650 MBps) 00:33:51.459 00:33:51.459 11:47:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:33:51.459 11:47:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:53.362 11:47:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:33:53.362 11:47:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=cb854ce6b2442b029dc9c58a3bebaf87 00:33:53.363 11:47:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ cb854ce6b2442b029dc9c58a3bebaf87 != \c\b\8\5\4\c\e\6\b\2\4\4\2\b\0\2\9\d\c\9\c\5\8\a\3\b\e\b\a\f\8\7 ]] 00:33:53.363 Validate MD5 checksum, iteration 2 00:33:53.363 11:47:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:33:53.363 11:47:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:53.363 11:47:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:33:53.363 11:47:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:53.363 11:47:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:53.363 11:47:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:53.363 11:47:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:53.363 11:47:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:53.363 11:47:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:53.363 [2024-11-20 11:47:59.039284] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
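The jq filters threaded through the trace above reduce the bdev_ftl_get_properties JSON to simple counters: one counts cache_device chunks with non-zero utilization (used=0), the other counts bands still OPENED (opened=0), and the [[ 0 -ne 0 ]] guards pass only when both are zero. Reproduced standalone as a sketch (the exit guard mirrors the -ne 0 tests in the trace):

    # Count NV-cache chunks still holding data; a non-zero result fails the guard.
    used=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "cache_device")
               | .chunks[] | select(.utilization != 0.0)] | length')
    [[ $used -ne 0 ]] && exit 1

Each validation pass then copies bs * count = 1048576 B * 1024 = 1 GiB from ftln1 over NVMe/TCP at queue depth 2 (iteration 1 above averaged 650 MBps, roughly 1.6 s), hashes the file, and requires the hash to match the reference recorded for that extent; --skip advances by 1024 blocks per iteration, so consecutive passes cover disjoint 1 GiB ranges. The check itself condenses to the following, with expected_sum standing in for the harness's recorded reference (a sketch, not the upgrade_shutdown.sh source):

    file=/home/vagrant/spdk_repo/spdk/test/ftl/file
    sum=$(md5sum "$file" | cut -f1 -d' ')      # e.g. cb854ce6b2442b029dc9c58a3bebaf87
    [[ $sum == "$expected_sum" ]] || exit 1    # a mismatch means the data did not survive

The backslash-riddled comparison in the trace ([[ cb854... != \c\b\8\5\4... ]]) is plain bash escaping: an unquoted right-hand side of != inside [[ ]] is treated as a glob pattern, so the harness escapes every character of the expected sum to force a literal match.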
00:33:53.363 [2024-11-20 11:47:59.039451] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84456 ] 00:33:53.632 [2024-11-20 11:47:59.235325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:53.900 [2024-11-20 11:47:59.398200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:55.801  [2024-11-20T11:48:02.128Z] Copying: 607/1024 [MB] (607 MBps) [2024-11-20T11:48:04.029Z] Copying: 1024/1024 [MB] (average 583 MBps) 00:33:58.267 00:33:58.267 11:48:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:33:58.267 11:48:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:34:00.245 11:48:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:34:00.245 11:48:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=d01297a2c0237a5efe56cf76adadca85 00:34:00.245 11:48:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ d01297a2c0237a5efe56cf76adadca85 != \d\0\1\2\9\7\a\2\c\0\2\3\7\a\5\e\f\e\5\6\c\f\7\6\a\d\a\d\c\a\8\5 ]] 00:34:00.245 11:48:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:34:00.245 11:48:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:34:00.245 11:48:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:34:00.245 11:48:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 84298 ]] 00:34:00.245 11:48:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 84298 00:34:00.245 11:48:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:34:00.245 11:48:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:34:00.245 11:48:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:34:00.245 11:48:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:34:00.245 11:48:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:34:00.245 11:48:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84531 00:34:00.245 11:48:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:34:00.245 11:48:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84531 00:34:00.245 11:48:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84531 ']' 00:34:00.245 11:48:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:00.246 11:48:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:00.246 11:48:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:00.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
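With both checksums matching (cb854ce6... and d01297a2...), the harness reaches the step the test is named for: tcp_target_shutdown_dirty kills pid 84298 with SIGKILL, so FTL gets no chance to run its shutdown sequence, and tcp_target_setup immediately relaunches the target (pid 84531) from the identical tgt.json. Distilled from the xtrace above (a sketch; variable names are illustrative):

    kill -9 "$spdk_tgt_pid"    # SIGKILL: FTL cannot flush state or persist shutdown metadata
    unset spdk_tgt_pid
    # Relaunch from the same JSON config; on load, FTL must detect the dirty
    # state and recover rather than perform a normal clean startup.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!

The 'line 834: 84298 Killed' job notice and the second full startup trace below are the expected evidence of this: the new instance re-opens cachen1p0, reloads the superblock, and rebuilds the same layout geometry from persisted metadata.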
00:34:00.246 11:48:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:00.246 11:48:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:34:00.246 11:48:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:34:00.246 [2024-11-20 11:48:05.649467] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:34:00.246 [2024-11-20 11:48:05.649676] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84531 ] 00:34:00.246 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 84298 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:34:00.246 [2024-11-20 11:48:05.828508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:00.246 [2024-11-20 11:48:05.940130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:01.181 [2024-11-20 11:48:06.895301] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:34:01.181 [2024-11-20 11:48:06.895381] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:34:01.442 [2024-11-20 11:48:07.041969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:01.442 [2024-11-20 11:48:07.042015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:34:01.442 [2024-11-20 11:48:07.042031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:34:01.442 [2024-11-20 11:48:07.042042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:01.442 [2024-11-20 11:48:07.042096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:01.442 [2024-11-20 11:48:07.042109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:34:01.442 [2024-11-20 11:48:07.042120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:34:01.442 [2024-11-20 11:48:07.042130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:01.442 [2024-11-20 11:48:07.042161] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:34:01.442 [2024-11-20 11:48:07.043260] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:34:01.442 [2024-11-20 11:48:07.043289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:01.442 [2024-11-20 11:48:07.043301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:34:01.442 [2024-11-20 11:48:07.043312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.139 ms 00:34:01.442 [2024-11-20 11:48:07.043322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:01.442 [2024-11-20 11:48:07.043696] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:34:01.442 [2024-11-20 11:48:07.068554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:01.442 [2024-11-20 11:48:07.068593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:34:01.442 [2024-11-20 11:48:07.068608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.858 ms 
00:34:01.442 [2024-11-20 11:48:07.068619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:01.442 [2024-11-20 11:48:07.083790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:01.442 [2024-11-20 11:48:07.083822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:34:01.442 [2024-11-20 11:48:07.083839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:34:01.442 [2024-11-20 11:48:07.083849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:01.442 [2024-11-20 11:48:07.084348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:01.442 [2024-11-20 11:48:07.084363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:34:01.442 [2024-11-20 11:48:07.084374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.409 ms 00:34:01.442 [2024-11-20 11:48:07.084385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:01.442 [2024-11-20 11:48:07.084444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:01.442 [2024-11-20 11:48:07.084460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:34:01.442 [2024-11-20 11:48:07.084491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:34:01.442 [2024-11-20 11:48:07.084503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:01.442 [2024-11-20 11:48:07.084531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:01.442 [2024-11-20 11:48:07.084543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:34:01.442 [2024-11-20 11:48:07.084553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:34:01.442 [2024-11-20 11:48:07.084563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:01.442 [2024-11-20 11:48:07.084589] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:34:01.442 [2024-11-20 11:48:07.088770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:01.443 [2024-11-20 11:48:07.088797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:34:01.443 [2024-11-20 11:48:07.088809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.186 ms 00:34:01.443 [2024-11-20 11:48:07.088820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:01.443 [2024-11-20 11:48:07.088852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:01.443 [2024-11-20 11:48:07.088863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:34:01.443 [2024-11-20 11:48:07.088874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:34:01.443 [2024-11-20 11:48:07.088884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:01.443 [2024-11-20 11:48:07.088925] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:34:01.443 [2024-11-20 11:48:07.088949] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:34:01.443 [2024-11-20 11:48:07.088985] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:34:01.443 [2024-11-20 11:48:07.089007] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:34:01.443 [2024-11-20 
11:48:07.089108] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:34:01.443 [2024-11-20 11:48:07.089122] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:34:01.443 [2024-11-20 11:48:07.089135] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:34:01.443 [2024-11-20 11:48:07.089148] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:34:01.443 [2024-11-20 11:48:07.089161] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:34:01.443 [2024-11-20 11:48:07.089172] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:34:01.443 [2024-11-20 11:48:07.089182] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:34:01.443 [2024-11-20 11:48:07.089191] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:34:01.443 [2024-11-20 11:48:07.089201] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:34:01.443 [2024-11-20 11:48:07.089212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:01.443 [2024-11-20 11:48:07.089225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:34:01.443 [2024-11-20 11:48:07.089236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.289 ms 00:34:01.443 [2024-11-20 11:48:07.089246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:01.443 [2024-11-20 11:48:07.089320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:01.443 [2024-11-20 11:48:07.089331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:34:01.443 [2024-11-20 11:48:07.089341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.056 ms 00:34:01.443 [2024-11-20 11:48:07.089351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:01.443 [2024-11-20 11:48:07.089442] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:34:01.443 [2024-11-20 11:48:07.089454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:34:01.443 [2024-11-20 11:48:07.089468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:34:01.443 [2024-11-20 11:48:07.089494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:01.443 [2024-11-20 11:48:07.089505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:34:01.443 [2024-11-20 11:48:07.089515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:34:01.443 [2024-11-20 11:48:07.089524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:34:01.443 [2024-11-20 11:48:07.089534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:34:01.443 [2024-11-20 11:48:07.089545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:34:01.443 [2024-11-20 11:48:07.089554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:01.443 [2024-11-20 11:48:07.089564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:34:01.443 [2024-11-20 11:48:07.089573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:34:01.443 [2024-11-20 11:48:07.089582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:01.443 [2024-11-20 
11:48:07.089591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:34:01.443 [2024-11-20 11:48:07.089601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:34:01.443 [2024-11-20 11:48:07.089610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:01.443 [2024-11-20 11:48:07.089619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:34:01.443 [2024-11-20 11:48:07.089628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:34:01.443 [2024-11-20 11:48:07.089637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:01.443 [2024-11-20 11:48:07.089646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:34:01.443 [2024-11-20 11:48:07.089656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:34:01.443 [2024-11-20 11:48:07.089665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:34:01.443 [2024-11-20 11:48:07.089674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:34:01.443 [2024-11-20 11:48:07.089694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:34:01.443 [2024-11-20 11:48:07.089703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:34:01.443 [2024-11-20 11:48:07.089713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:34:01.443 [2024-11-20 11:48:07.089722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:34:01.443 [2024-11-20 11:48:07.089732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:34:01.443 [2024-11-20 11:48:07.089741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:34:01.443 [2024-11-20 11:48:07.089750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:34:01.443 [2024-11-20 11:48:07.089760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:34:01.443 [2024-11-20 11:48:07.089769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:34:01.443 [2024-11-20 11:48:07.089778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:34:01.443 [2024-11-20 11:48:07.089788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:01.443 [2024-11-20 11:48:07.089797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:34:01.443 [2024-11-20 11:48:07.089806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:34:01.443 [2024-11-20 11:48:07.089819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:01.443 [2024-11-20 11:48:07.089828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:34:01.443 [2024-11-20 11:48:07.089838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:34:01.443 [2024-11-20 11:48:07.089847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:01.443 [2024-11-20 11:48:07.089856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:34:01.443 [2024-11-20 11:48:07.089866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:34:01.443 [2024-11-20 11:48:07.089875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:01.443 [2024-11-20 11:48:07.089884] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:34:01.443 [2024-11-20 11:48:07.089896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:34:01.443 
[2024-11-20 11:48:07.089906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:34:01.443 [2024-11-20 11:48:07.089916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:01.443 [2024-11-20 11:48:07.089926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:34:01.443 [2024-11-20 11:48:07.089936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:34:01.443 [2024-11-20 11:48:07.089946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:34:01.443 [2024-11-20 11:48:07.089955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:34:01.443 [2024-11-20 11:48:07.089965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:34:01.443 [2024-11-20 11:48:07.089974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:34:01.443 [2024-11-20 11:48:07.089984] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:34:01.443 [2024-11-20 11:48:07.089997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:01.443 [2024-11-20 11:48:07.090008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:34:01.443 [2024-11-20 11:48:07.090019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:34:01.443 [2024-11-20 11:48:07.090030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:34:01.443 [2024-11-20 11:48:07.090041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:34:01.443 [2024-11-20 11:48:07.090052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:34:01.443 [2024-11-20 11:48:07.090062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:34:01.443 [2024-11-20 11:48:07.090073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:34:01.443 [2024-11-20 11:48:07.090084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:34:01.443 [2024-11-20 11:48:07.090094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:34:01.443 [2024-11-20 11:48:07.090105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:34:01.443 [2024-11-20 11:48:07.090115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:34:01.443 [2024-11-20 11:48:07.090125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:34:01.443 [2024-11-20 11:48:07.090136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:34:01.444 [2024-11-20 11:48:07.090148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:34:01.444 [2024-11-20 11:48:07.090158] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:34:01.444 [2024-11-20 11:48:07.090169] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:01.444 [2024-11-20 11:48:07.090180] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:34:01.444 [2024-11-20 11:48:07.090191] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:34:01.444 [2024-11-20 11:48:07.090202] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:34:01.444 [2024-11-20 11:48:07.090212] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:34:01.444 [2024-11-20 11:48:07.090223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:01.444 [2024-11-20 11:48:07.090237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:34:01.444 [2024-11-20 11:48:07.090247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.836 ms 00:34:01.444 [2024-11-20 11:48:07.090257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:01.444 [2024-11-20 11:48:07.126673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:01.444 [2024-11-20 11:48:07.126717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:34:01.444 [2024-11-20 11:48:07.126732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.359 ms 00:34:01.444 [2024-11-20 11:48:07.126743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:01.444 [2024-11-20 11:48:07.126800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:01.444 [2024-11-20 11:48:07.126812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:34:01.444 [2024-11-20 11:48:07.126823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:34:01.444 [2024-11-20 11:48:07.126833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:01.444 [2024-11-20 11:48:07.174477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:01.444 [2024-11-20 11:48:07.174539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:34:01.444 [2024-11-20 11:48:07.174553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 47.563 ms 00:34:01.444 [2024-11-20 11:48:07.174564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:01.444 [2024-11-20 11:48:07.174617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:01.444 [2024-11-20 11:48:07.174628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:34:01.444 [2024-11-20 11:48:07.174640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:34:01.444 [2024-11-20 11:48:07.174650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:01.444 [2024-11-20 11:48:07.174795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:01.444 [2024-11-20 11:48:07.174809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 
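
A quick cross-check of the layout dump above: ftl_layout_setup reports 3,774,873 L2P entries at 4 bytes each, about 14.40 MiB, and the superblock dump shows the l2p region (type 0x2) at blk_sz 0xe80 = 3712 blocks, which at a 4 KiB FTL block comes to exactly the 14.50 MiB printed for the l2p region, so the table fits with a little alignment slack. The 4 KiB block size is an assumption consistent with these numbers, not something the log states. A minimal shell check:

entries=3774873; addr_bytes=4; blk=4096   # entries/address size from ftl_layout_setup; blk=4096 is assumed
need=$((entries * addr_bytes))            # 15099492 bytes, ~14.40 MiB
region=$((0xe80 * blk))                   # 15204352 bytes, exactly 14.50 MiB
echo "l2p table needs $need bytes; region holds $region bytes"
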
00:34:01.444 [2024-11-20 11:48:07.174820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:34:01.444 [2024-11-20 11:48:07.174830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:01.444 [2024-11-20 11:48:07.174888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:01.444 [2024-11-20 11:48:07.174900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:34:01.444 [2024-11-20 11:48:07.174910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:34:01.444 [2024-11-20 11:48:07.174920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:01.444 [2024-11-20 11:48:07.196071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:01.444 [2024-11-20 11:48:07.196110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:34:01.444 [2024-11-20 11:48:07.196125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.122 ms 00:34:01.444 [2024-11-20 11:48:07.196136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:01.444 [2024-11-20 11:48:07.196286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:01.444 [2024-11-20 11:48:07.196302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:34:01.444 [2024-11-20 11:48:07.196313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:34:01.444 [2024-11-20 11:48:07.196324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:01.704 [2024-11-20 11:48:07.229542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:01.704 [2024-11-20 11:48:07.229582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:34:01.704 [2024-11-20 11:48:07.229596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.195 ms 00:34:01.704 [2024-11-20 11:48:07.229608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:01.704 [2024-11-20 11:48:07.244742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:01.704 [2024-11-20 11:48:07.244776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:34:01.704 [2024-11-20 11:48:07.244812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.766 ms 00:34:01.704 [2024-11-20 11:48:07.244822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:01.704 [2024-11-20 11:48:07.334170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:01.704 [2024-11-20 11:48:07.334225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:34:01.704 [2024-11-20 11:48:07.334249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 89.278 ms 00:34:01.704 [2024-11-20 11:48:07.334260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:01.704 [2024-11-20 11:48:07.334485] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:34:01.704 [2024-11-20 11:48:07.334622] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:34:01.704 [2024-11-20 11:48:07.334746] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:34:01.704 [2024-11-20 11:48:07.334871] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:34:01.704 [2024-11-20 11:48:07.334884] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:01.704 [2024-11-20 11:48:07.334895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:34:01.704 [2024-11-20 11:48:07.334908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.544 ms 00:34:01.704 [2024-11-20 11:48:07.334918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:01.704 [2024-11-20 11:48:07.335022] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:34:01.704 [2024-11-20 11:48:07.335037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:01.704 [2024-11-20 11:48:07.335052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:34:01.704 [2024-11-20 11:48:07.335063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:34:01.704 [2024-11-20 11:48:07.335073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:01.704 [2024-11-20 11:48:07.359028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:01.704 [2024-11-20 11:48:07.359081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:34:01.704 [2024-11-20 11:48:07.359104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.929 ms 00:34:01.704 [2024-11-20 11:48:07.359116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:01.704 [2024-11-20 11:48:07.374156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:01.704 [2024-11-20 11:48:07.374193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:34:01.704 [2024-11-20 11:48:07.374207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:34:01.704 [2024-11-20 11:48:07.374218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:01.704 [2024-11-20 11:48:07.374325] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:34:01.704 [2024-11-20 11:48:07.374543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:01.704 [2024-11-20 11:48:07.374559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:34:01.704 [2024-11-20 11:48:07.374571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.219 ms 00:34:01.704 [2024-11-20 11:48:07.374581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:02.272 [2024-11-20 11:48:07.907850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:02.272 [2024-11-20 11:48:07.907925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:34:02.272 [2024-11-20 11:48:07.907942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 531.985 ms 00:34:02.272 [2024-11-20 11:48:07.907955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:02.272 [2024-11-20 11:48:07.913961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:02.272 [2024-11-20 11:48:07.914001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:34:02.272 [2024-11-20 11:48:07.914014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.275 ms 00:34:02.272 [2024-11-20 11:48:07.914025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:02.272 [2024-11-20 11:48:07.914407] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered 
chunk, offset = 262144, seq id 14 00:34:02.272 [2024-11-20 11:48:07.914434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:02.272 [2024-11-20 11:48:07.914445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:34:02.272 [2024-11-20 11:48:07.914457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.369 ms 00:34:02.272 [2024-11-20 11:48:07.914467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:02.272 [2024-11-20 11:48:07.914513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:02.272 [2024-11-20 11:48:07.914525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:34:02.272 [2024-11-20 11:48:07.914537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:34:02.272 [2024-11-20 11:48:07.914546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:02.272 [2024-11-20 11:48:07.914589] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 540.261 ms, result 0 00:34:02.272 [2024-11-20 11:48:07.914633] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:34:02.272 [2024-11-20 11:48:07.914719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:02.272 [2024-11-20 11:48:07.914729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:34:02.272 [2024-11-20 11:48:07.914739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.087 ms 00:34:02.272 [2024-11-20 11:48:07.914749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:02.842 [2024-11-20 11:48:08.451197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:02.842 [2024-11-20 11:48:08.451263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:34:02.842 [2024-11-20 11:48:08.451280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 535.329 ms 00:34:02.842 [2024-11-20 11:48:08.451290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:02.842 [2024-11-20 11:48:08.456746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:02.842 [2024-11-20 11:48:08.456783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:34:02.842 [2024-11-20 11:48:08.456796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.982 ms 00:34:02.842 [2024-11-20 11:48:08.456806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:02.842 [2024-11-20 11:48:08.457247] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:34:02.842 [2024-11-20 11:48:08.457274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:02.842 [2024-11-20 11:48:08.457285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:34:02.842 [2024-11-20 11:48:08.457297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.437 ms 00:34:02.842 [2024-11-20 11:48:08.457307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:02.842 [2024-11-20 11:48:08.457340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:02.842 [2024-11-20 11:48:08.457353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:34:02.842 [2024-11-20 11:48:08.457364] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:34:02.842 [2024-11-20 11:48:08.457374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:02.842 [2024-11-20 11:48:08.457413] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 542.775 ms, result 0 00:34:02.842 [2024-11-20 11:48:08.457456] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:34:02.842 [2024-11-20 11:48:08.457481] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:34:02.842 [2024-11-20 11:48:08.457495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:02.842 [2024-11-20 11:48:08.457507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:34:02.842 [2024-11-20 11:48:08.457517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1083.184 ms 00:34:02.842 [2024-11-20 11:48:08.457528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:02.842 [2024-11-20 11:48:08.457560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:02.843 [2024-11-20 11:48:08.457572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:34:02.843 [2024-11-20 11:48:08.457587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:34:02.843 [2024-11-20 11:48:08.457598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:02.843 [2024-11-20 11:48:08.469175] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:34:02.843 [2024-11-20 11:48:08.469315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:02.843 [2024-11-20 11:48:08.469329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:34:02.843 [2024-11-20 11:48:08.469341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.699 ms 00:34:02.843 [2024-11-20 11:48:08.469352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:02.843 [2024-11-20 11:48:08.469954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:02.843 [2024-11-20 11:48:08.469973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:34:02.843 [2024-11-20 11:48:08.469989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.526 ms 00:34:02.843 [2024-11-20 11:48:08.469999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:02.843 [2024-11-20 11:48:08.472068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:02.843 [2024-11-20 11:48:08.472088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:34:02.843 [2024-11-20 11:48:08.472100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.050 ms 00:34:02.843 [2024-11-20 11:48:08.472110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:02.843 [2024-11-20 11:48:08.472149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:02.843 [2024-11-20 11:48:08.472161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:34:02.843 [2024-11-20 11:48:08.472171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:34:02.843 [2024-11-20 11:48:08.472185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:02.843 [2024-11-20 11:48:08.472285] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:02.843 [2024-11-20 11:48:08.472297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:34:02.843 [2024-11-20 11:48:08.472307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:34:02.843 [2024-11-20 11:48:08.472317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:02.843 [2024-11-20 11:48:08.472339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:02.843 [2024-11-20 11:48:08.472350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:34:02.843 [2024-11-20 11:48:08.472360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:34:02.843 [2024-11-20 11:48:08.472370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:02.843 [2024-11-20 11:48:08.472398] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:34:02.843 [2024-11-20 11:48:08.472413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:02.843 [2024-11-20 11:48:08.472423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:34:02.843 [2024-11-20 11:48:08.472433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:34:02.843 [2024-11-20 11:48:08.472443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:02.843 [2024-11-20 11:48:08.472505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:02.843 [2024-11-20 11:48:08.472520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:34:02.843 [2024-11-20 11:48:08.472530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:34:02.843 [2024-11-20 11:48:08.472557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:02.843 [2024-11-20 11:48:08.473593] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1431.111 ms, result 0 00:34:02.843 [2024-11-20 11:48:08.485966] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:02.843 [2024-11-20 11:48:08.501960] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:34:02.843 [2024-11-20 11:48:08.511654] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:34:02.843 Validate MD5 checksum, iteration 1 00:34:02.843 11:48:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:02.843 11:48:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:34:02.843 11:48:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:34:02.843 11:48:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:34:02.843 11:48:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:34:02.843 11:48:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:34:02.843 11:48:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:34:02.843 11:48:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:34:02.843 11:48:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:34:02.843 11:48:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:34:02.843 11:48:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:34:02.843 11:48:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:34:02.843 11:48:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:34:02.843 11:48:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:34:02.843 11:48:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:34:03.102 [2024-11-20 11:48:08.629318] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:34:03.103 [2024-11-20 11:48:08.629442] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84566 ] 00:34:03.103 [2024-11-20 11:48:08.810578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:03.361 [2024-11-20 11:48:08.971180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:05.266  [2024-11-20T11:48:11.288Z] Copying: 688/1024 [MB] (688 MBps) [2024-11-20T11:48:14.572Z] Copying: 1024/1024 [MB] (average 676 MBps) 00:34:08.810 00:34:08.810 11:48:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:34:08.810 11:48:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:34:10.195 11:48:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:34:10.195 Validate MD5 checksum, iteration 2 00:34:10.195 11:48:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=cb854ce6b2442b029dc9c58a3bebaf87 00:34:10.195 11:48:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ cb854ce6b2442b029dc9c58a3bebaf87 != \c\b\8\5\4\c\e\6\b\2\4\4\2\b\0\2\9\d\c\9\c\5\8\a\3\b\e\b\a\f\8\7 ]] 00:34:10.195 11:48:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:34:10.195 11:48:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:34:10.195 11:48:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:34:10.196 11:48:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:34:10.196 11:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:34:10.196 11:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:34:10.196 11:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:34:10.196 11:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:34:10.196 11:48:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:34:10.196 [2024-11-20 11:48:15.869682] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 00:34:10.196 [2024-11-20 11:48:15.869863] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84644 ] 00:34:10.457 [2024-11-20 11:48:16.056767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:10.715 [2024-11-20 11:48:16.244947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:12.620  [2024-11-20T11:48:18.647Z] Copying: 644/1024 [MB] (644 MBps) [2024-11-20T11:48:21.185Z] Copying: 1024/1024 [MB] (average 644 MBps) 00:34:15.423 00:34:15.423 11:48:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:34:15.423 11:48:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:34:16.801 11:48:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:34:16.801 11:48:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=d01297a2c0237a5efe56cf76adadca85 00:34:16.801 11:48:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ d01297a2c0237a5efe56cf76adadca85 != \d\0\1\2\9\7\a\2\c\0\2\3\7\a\5\e\f\e\5\6\c\f\7\6\a\d\a\d\c\a\8\5 ]] 00:34:16.801 11:48:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:34:16.801 11:48:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:34:16.801 11:48:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:34:16.801 11:48:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:34:16.801 11:48:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:34:16.801 11:48:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:34:17.060 11:48:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:34:17.060 11:48:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:34:17.060 11:48:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:34:17.060 11:48:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:34:17.060 11:48:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84531 ]] 00:34:17.060 11:48:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84531 00:34:17.060 11:48:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84531 ']' 00:34:17.060 11:48:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84531 00:34:17.060 11:48:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:34:17.060 11:48:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:17.060 11:48:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84531 00:34:17.060 killing process with pid 84531 00:34:17.060 11:48:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:17.060 11:48:22 ftl.ftl_upgrade_shutdown -- 
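
The xtrace above is the core of this test: upgrade_shutdown.sh reads the FTL bdev back in two 1 GiB slices over NVMe/TCP and compares each slice's MD5 against the sum recorded before the shutdown/upgrade cycle (cb854ce6... and d01297a2... both match here). tcp_dd is the thin wrapper seen in the trace: it runs spdk_dd with the initiator's --rpc-socket and --json config prepended to the caller's dd arguments. A sketch of the validation loop, reconstructed from the trace; the expected[] array is a hypothetical stand-in for however the reference sums are actually stored:

test_validate_checksum() {
    local file=/home/vagrant/spdk_repo/spdk/test/ftl/file   # output path from the trace
    local iterations=2 skip=0 i sum
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # Read 1024 x 1 MiB blocks from bdev ftln1 at the running offset, queue depth 2.
        tcp_dd --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip="$skip"
        skip=$((skip + 1024))
        sum=$(md5sum "$file" | cut -f1 -d' ')
        # expected[] holds the pre-shutdown checksums (hypothetical name).
        [[ $sum != "${expected[i]}" ]] && return 1
    done
}
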
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:17.060 11:48:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84531' 00:34:17.060 11:48:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 84531 00:34:17.060 11:48:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84531 00:34:18.442 [2024-11-20 11:48:23.821696] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:34:18.442 [2024-11-20 11:48:23.840892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:18.442 [2024-11-20 11:48:23.840930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:34:18.442 [2024-11-20 11:48:23.840944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:34:18.442 [2024-11-20 11:48:23.840954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:18.442 [2024-11-20 11:48:23.840975] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:34:18.442 [2024-11-20 11:48:23.845199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:18.442 [2024-11-20 11:48:23.845225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:34:18.442 [2024-11-20 11:48:23.845237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.209 ms 00:34:18.442 [2024-11-20 11:48:23.845252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:18.442 [2024-11-20 11:48:23.845467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:18.442 [2024-11-20 11:48:23.845481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:34:18.442 [2024-11-20 11:48:23.845502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.192 ms 00:34:18.442 [2024-11-20 11:48:23.845512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:18.442 [2024-11-20 11:48:23.846758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:18.442 [2024-11-20 11:48:23.846788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:34:18.442 [2024-11-20 11:48:23.846800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.228 ms 00:34:18.442 [2024-11-20 11:48:23.846811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:18.442 [2024-11-20 11:48:23.847768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:18.442 [2024-11-20 11:48:23.847793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:34:18.442 [2024-11-20 11:48:23.847804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.918 ms 00:34:18.442 [2024-11-20 11:48:23.847815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:18.442 [2024-11-20 11:48:23.862583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:18.442 [2024-11-20 11:48:23.862616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:34:18.442 [2024-11-20 11:48:23.862630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.715 ms 00:34:18.442 [2024-11-20 11:48:23.862646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:18.442 [2024-11-20 11:48:23.870443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:18.442 [2024-11-20 11:48:23.870480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl] name: Persist valid map metadata 00:34:18.442 [2024-11-20 11:48:23.870492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.760 ms 00:34:18.442 [2024-11-20 11:48:23.870502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:18.442 [2024-11-20 11:48:23.870605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:18.442 [2024-11-20 11:48:23.870619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:34:18.442 [2024-11-20 11:48:23.870629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:34:18.442 [2024-11-20 11:48:23.870640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:18.442 [2024-11-20 11:48:23.885726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:18.442 [2024-11-20 11:48:23.885758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:34:18.442 [2024-11-20 11:48:23.885771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.062 ms 00:34:18.442 [2024-11-20 11:48:23.885781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:18.442 [2024-11-20 11:48:23.900930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:18.442 [2024-11-20 11:48:23.900963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:34:18.442 [2024-11-20 11:48:23.900975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.114 ms 00:34:18.442 [2024-11-20 11:48:23.900985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:18.442 [2024-11-20 11:48:23.915432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:18.442 [2024-11-20 11:48:23.915483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:34:18.442 [2024-11-20 11:48:23.915513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.411 ms 00:34:18.442 [2024-11-20 11:48:23.915523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:18.442 [2024-11-20 11:48:23.930193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:18.442 [2024-11-20 11:48:23.930224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:34:18.442 [2024-11-20 11:48:23.930236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.592 ms 00:34:18.442 [2024-11-20 11:48:23.930245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:18.442 [2024-11-20 11:48:23.930287] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:34:18.442 [2024-11-20 11:48:23.930304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:34:18.442 [2024-11-20 11:48:23.930318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:34:18.442 [2024-11-20 11:48:23.930329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:34:18.442 [2024-11-20 11:48:23.930340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:18.442 [2024-11-20 11:48:23.930351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:18.442 [2024-11-20 11:48:23.930361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:18.442 [2024-11-20 11:48:23.930372] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:18.442 [2024-11-20 11:48:23.930383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:18.442 [2024-11-20 11:48:23.930393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:18.442 [2024-11-20 11:48:23.930404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:18.442 [2024-11-20 11:48:23.930414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:18.442 [2024-11-20 11:48:23.930425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:18.442 [2024-11-20 11:48:23.930435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:18.442 [2024-11-20 11:48:23.930446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:18.442 [2024-11-20 11:48:23.930456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:18.442 [2024-11-20 11:48:23.930467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:18.443 [2024-11-20 11:48:23.930486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:18.443 [2024-11-20 11:48:23.930497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:18.443 [2024-11-20 11:48:23.930509] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:34:18.443 [2024-11-20 11:48:23.930519] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: d65cc6e7-c7de-4cca-b53b-ed23fb5acccf 00:34:18.443 [2024-11-20 11:48:23.930531] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:34:18.443 [2024-11-20 11:48:23.930541] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:34:18.443 [2024-11-20 11:48:23.930551] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:34:18.443 [2024-11-20 11:48:23.930562] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:34:18.443 [2024-11-20 11:48:23.930572] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:34:18.443 [2024-11-20 11:48:23.930582] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:34:18.443 [2024-11-20 11:48:23.930592] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:34:18.443 [2024-11-20 11:48:23.930601] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:34:18.443 [2024-11-20 11:48:23.930610] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:34:18.443 [2024-11-20 11:48:23.930622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:18.443 [2024-11-20 11:48:23.930638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:34:18.443 [2024-11-20 11:48:23.930648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.336 ms 00:34:18.443 [2024-11-20 11:48:23.930659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:18.443 [2024-11-20 11:48:23.950821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:18.443 [2024-11-20 11:48:23.950851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: 
Deinitialize L2P 00:34:18.443 [2024-11-20 11:48:23.950879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.144 ms 00:34:18.443 [2024-11-20 11:48:23.950889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:18.443 [2024-11-20 11:48:23.951453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:18.443 [2024-11-20 11:48:23.951486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:34:18.443 [2024-11-20 11:48:23.951498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.527 ms 00:34:18.443 [2024-11-20 11:48:23.951508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:18.443 [2024-11-20 11:48:24.017660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:18.443 [2024-11-20 11:48:24.017699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:34:18.443 [2024-11-20 11:48:24.017713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:18.443 [2024-11-20 11:48:24.017724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:18.443 [2024-11-20 11:48:24.017772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:18.443 [2024-11-20 11:48:24.017784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:34:18.443 [2024-11-20 11:48:24.017794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:18.443 [2024-11-20 11:48:24.017805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:18.443 [2024-11-20 11:48:24.017887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:18.443 [2024-11-20 11:48:24.017902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:34:18.443 [2024-11-20 11:48:24.017913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:18.443 [2024-11-20 11:48:24.017923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:18.443 [2024-11-20 11:48:24.017941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:18.443 [2024-11-20 11:48:24.017965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:34:18.443 [2024-11-20 11:48:24.017976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:18.443 [2024-11-20 11:48:24.017986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:18.443 [2024-11-20 11:48:24.142961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:18.443 [2024-11-20 11:48:24.143009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:34:18.443 [2024-11-20 11:48:24.143042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:18.443 [2024-11-20 11:48:24.143053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:18.702 [2024-11-20 11:48:24.243443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:18.702 [2024-11-20 11:48:24.243529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:34:18.702 [2024-11-20 11:48:24.243544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:18.702 [2024-11-20 11:48:24.243554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:18.702 [2024-11-20 11:48:24.243669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:18.702 [2024-11-20 11:48:24.243682] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:34:18.702 [2024-11-20 11:48:24.243693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:18.702 [2024-11-20 11:48:24.243703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:18.702 [2024-11-20 11:48:24.243773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:18.702 [2024-11-20 11:48:24.243785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:34:18.702 [2024-11-20 11:48:24.243800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:18.702 [2024-11-20 11:48:24.243821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:18.702 [2024-11-20 11:48:24.243931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:18.702 [2024-11-20 11:48:24.243950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:34:18.702 [2024-11-20 11:48:24.243961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:18.702 [2024-11-20 11:48:24.243971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:18.702 [2024-11-20 11:48:24.244010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:18.702 [2024-11-20 11:48:24.244022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:34:18.702 [2024-11-20 11:48:24.244032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:18.702 [2024-11-20 11:48:24.244046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:18.702 [2024-11-20 11:48:24.244085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:18.702 [2024-11-20 11:48:24.244097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:34:18.702 [2024-11-20 11:48:24.244108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:18.702 [2024-11-20 11:48:24.244118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:18.702 [2024-11-20 11:48:24.244170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:18.702 [2024-11-20 11:48:24.244183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:34:18.702 [2024-11-20 11:48:24.244197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:18.702 [2024-11-20 11:48:24.244207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:18.702 [2024-11-20 11:48:24.244326] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 403.396 ms, result 0 00:34:20.076 11:48:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:34:20.076 11:48:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:34:20.076 11:48:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:34:20.076 11:48:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:34:20.076 11:48:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:34:20.076 11:48:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:34:20.076 Remove shared memory files 00:34:20.076 11:48:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:34:20.076 11:48:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove 
shared memory files 00:34:20.076 11:48:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:34:20.076 11:48:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:34:20.076 11:48:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid84298 00:34:20.076 11:48:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:34:20.076 11:48:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:34:20.076 00:34:20.076 real 1m30.617s 00:34:20.076 user 2m5.851s 00:34:20.076 sys 0m23.090s 00:34:20.076 11:48:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:20.076 11:48:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:34:20.076 ************************************ 00:34:20.076 END TEST ftl_upgrade_shutdown 00:34:20.076 ************************************ 00:34:20.076 11:48:25 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:34:20.076 11:48:25 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:34:20.076 11:48:25 ftl -- ftl/ftl.sh@14 -- # killprocess 77509 00:34:20.076 11:48:25 ftl -- common/autotest_common.sh@954 -- # '[' -z 77509 ']' 00:34:20.076 11:48:25 ftl -- common/autotest_common.sh@958 -- # kill -0 77509 00:34:20.076 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77509) - No such process 00:34:20.076 Process with pid 77509 is not found 00:34:20.076 11:48:25 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 77509 is not found' 00:34:20.076 11:48:25 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:34:20.076 11:48:25 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=84775 00:34:20.076 11:48:25 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:34:20.076 11:48:25 ftl -- ftl/ftl.sh@20 -- # waitforlisten 84775 00:34:20.076 11:48:25 ftl -- common/autotest_common.sh@835 -- # '[' -z 84775 ']' 00:34:20.076 11:48:25 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:20.076 11:48:25 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:20.076 11:48:25 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:20.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:20.076 11:48:25 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:20.076 11:48:25 ftl -- common/autotest_common.sh@10 -- # set +x 00:34:20.076 [2024-11-20 11:48:25.668550] Starting SPDK v25.01-pre git sha1 f86091626 / DPDK 24.03.0 initialization... 
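
Both teardown paths above funnel through the killprocess helper in common/autotest_common.sh: for pid 84531 the kill -0 probe succeeds, the process name is resolved with ps, and the target is killed and reaped; for the stale pid 77509 the probe fails ("No such process") and the helper only reports that the process is gone. A sketch reconstructed from those traces; the exact guards in autotest_common.sh may differ:

killprocess() {
    local pid=$1 process_name
    [ -z "$pid" ] && return 1
    if ! kill -0 "$pid"; then                  # probe only; fails (loudly) for a stale pid
        echo "Process with pid $pid is not found"
        return 0
    fi
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    [ "$process_name" = sudo ] && return 1     # refuse to kill a wrapping sudo
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                        # reap it when it is our child
}
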
00:34:20.076 [2024-11-20 11:48:25.668671] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84775 ] 00:34:20.333 [2024-11-20 11:48:25.836971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:20.333 [2024-11-20 11:48:25.950515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:21.268 11:48:26 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:21.268 11:48:26 ftl -- common/autotest_common.sh@868 -- # return 0 00:34:21.268 11:48:26 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:34:21.527 nvme0n1 00:34:21.527 11:48:27 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:34:21.527 11:48:27 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:34:21.527 11:48:27 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:34:21.527 11:48:27 ftl -- ftl/common.sh@28 -- # stores=4cdf7ccf-ab3c-44c7-9965-b87d81d31ae5 00:34:21.528 11:48:27 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:34:21.528 11:48:27 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4cdf7ccf-ab3c-44c7-9965-b87d81d31ae5 00:34:21.787 11:48:27 ftl -- ftl/ftl.sh@23 -- # killprocess 84775 00:34:21.787 11:48:27 ftl -- common/autotest_common.sh@954 -- # '[' -z 84775 ']' 00:34:21.787 11:48:27 ftl -- common/autotest_common.sh@958 -- # kill -0 84775 00:34:21.787 11:48:27 ftl -- common/autotest_common.sh@959 -- # uname 00:34:21.787 11:48:27 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:21.787 11:48:27 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84775 00:34:21.787 11:48:27 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:21.787 11:48:27 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:21.787 killing process with pid 84775 00:34:21.787 11:48:27 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84775' 00:34:21.787 11:48:27 ftl -- common/autotest_common.sh@973 -- # kill 84775 00:34:21.787 11:48:27 ftl -- common/autotest_common.sh@978 -- # wait 84775 00:34:24.319 11:48:29 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:34:24.319 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:24.579 Waiting for block devices as requested 00:34:24.579 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:34:24.838 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:34:24.838 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:34:24.838 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:34:30.132 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:34:30.132 Remove shared memory files 00:34:30.132 11:48:35 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:34:30.132 11:48:35 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:34:30.132 11:48:35 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:34:30.132 11:48:35 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:34:30.132 11:48:35 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:34:30.132 11:48:35 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:34:30.132 11:48:35 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:34:30.132 
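
The at_ftl_exit trace above leaves the machine clean for the next job: it attaches the NVMe controller as nvme0, deletes any leftover lvolstore by UUID, kills the helper target, resets the PCI setup, and removes the shared-memory files. The lvolstore scrub is the bdev_lvol_get_lvstores | jq / bdev_lvol_delete_lvstore pair in the trace; a sketch, assuming $rpc expands to the scripts/rpc.py invocation shown there:

clear_lvols() {
    local stores lvs
    stores=$($rpc bdev_lvol_get_lvstores | jq -r '.[] | .uuid')
    for lvs in $stores; do
        $rpc bdev_lvol_delete_lvstore -u "$lvs"
    done
}
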
************************************
00:34:30.132 END TEST ftl
00:34:30.132 ************************************
00:34:30.132 
00:34:30.132 real 10m47.432s
00:34:30.132 user 13m30.734s
00:34:30.132 sys 1m32.236s
00:34:30.132 11:48:35 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:34:30.132 11:48:35 ftl -- common/autotest_common.sh@10 -- # set +x
00:34:30.132 11:48:35 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:34:30.132 11:48:35 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:34:30.132 11:48:35 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:34:30.132 11:48:35 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:34:30.132 11:48:35 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:34:30.132 11:48:35 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:34:30.132 11:48:35 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:34:30.132 11:48:35 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:34:30.132 11:48:35 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:34:30.132 11:48:35 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:34:30.132 11:48:35 -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:30.132 11:48:35 -- common/autotest_common.sh@10 -- # set +x
00:34:30.132 11:48:35 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:34:30.132 11:48:35 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:34:30.132 11:48:35 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:34:30.132 11:48:35 -- common/autotest_common.sh@10 -- # set +x
00:34:32.669 INFO: APP EXITING
00:34:32.669 INFO: killing all VMs
00:34:32.669 INFO: killing vhost app
00:34:32.669 INFO: EXIT DONE
00:34:32.669 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:34:33.238 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:34:33.238 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:34:33.238 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:34:33.238 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:34:33.806 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:34:34.066 Cleaning
00:34:34.066 Removing: /var/run/dpdk/spdk0/config
00:34:34.066 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:34:34.066 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:34:34.066 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:34:34.066 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:34:34.066 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:34:34.066 Removing: /var/run/dpdk/spdk0/hugepage_info
00:34:34.066 Removing: /var/run/dpdk/spdk0
00:34:34.066 Removing: /var/run/dpdk/spdk_pid57853
00:34:34.066 Removing: /var/run/dpdk/spdk_pid58110
00:34:34.066 Removing: /var/run/dpdk/spdk_pid58345
00:34:34.066 Removing: /var/run/dpdk/spdk_pid58455
00:34:34.066 Removing: /var/run/dpdk/spdk_pid58516
00:34:34.066 Removing: /var/run/dpdk/spdk_pid58655
00:34:34.066 Removing: /var/run/dpdk/spdk_pid58683
00:34:34.066 Removing: /var/run/dpdk/spdk_pid58894
00:34:34.066 Removing: /var/run/dpdk/spdk_pid59012
00:34:34.066 Removing: /var/run/dpdk/spdk_pid59130
00:34:34.066 Removing: /var/run/dpdk/spdk_pid59258
00:34:34.066 Removing: /var/run/dpdk/spdk_pid59371
00:34:34.066 Removing: /var/run/dpdk/spdk_pid59415
00:34:34.066 Removing: /var/run/dpdk/spdk_pid59455
00:34:34.066 Removing: /var/run/dpdk/spdk_pid59531
00:34:34.066 Removing: /var/run/dpdk/spdk_pid59643
00:34:34.066 Removing: /var/run/dpdk/spdk_pid60123
00:34:34.066 Removing: /var/run/dpdk/spdk_pid60209
00:34:34.066 Removing: /var/run/dpdk/spdk_pid60290
00:34:34.066 Removing: /var/run/dpdk/spdk_pid60316
00:34:34.066 Removing: /var/run/dpdk/spdk_pid60486
00:34:34.066 Removing: /var/run/dpdk/spdk_pid60513
00:34:34.066 Removing: /var/run/dpdk/spdk_pid60682
00:34:34.066 Removing: /var/run/dpdk/spdk_pid60699
00:34:34.066 Removing: /var/run/dpdk/spdk_pid60774
00:34:34.066 Removing: /var/run/dpdk/spdk_pid60803
00:34:34.326 Removing: /var/run/dpdk/spdk_pid60873
00:34:34.326 Removing: /var/run/dpdk/spdk_pid60896
00:34:34.326 Removing: /var/run/dpdk/spdk_pid61108
00:34:34.326 Removing: /var/run/dpdk/spdk_pid61139
00:34:34.326 Removing: /var/run/dpdk/spdk_pid61228
00:34:34.326 Removing: /var/run/dpdk/spdk_pid61428
00:34:34.326 Removing: /var/run/dpdk/spdk_pid61528
00:34:34.326 Removing: /var/run/dpdk/spdk_pid61576
00:34:34.326 Removing: /var/run/dpdk/spdk_pid62052
00:34:34.326 Removing: /var/run/dpdk/spdk_pid62156
00:34:34.326 Removing: /var/run/dpdk/spdk_pid62276
00:34:34.326 Removing: /var/run/dpdk/spdk_pid62329
00:34:34.326 Removing: /var/run/dpdk/spdk_pid62360
00:34:34.326 Removing: /var/run/dpdk/spdk_pid62444
00:34:34.326 Removing: /var/run/dpdk/spdk_pid63091
00:34:34.326 Removing: /var/run/dpdk/spdk_pid63139
00:34:34.326 Removing: /var/run/dpdk/spdk_pid63696
00:34:34.326 Removing: /var/run/dpdk/spdk_pid63801
00:34:34.326 Removing: /var/run/dpdk/spdk_pid63922
00:34:34.326 Removing: /var/run/dpdk/spdk_pid63986
00:34:34.326 Removing: /var/run/dpdk/spdk_pid64012
00:34:34.326 Removing: /var/run/dpdk/spdk_pid64041
00:34:34.326 Removing: /var/run/dpdk/spdk_pid65947
00:34:34.326 Removing: /var/run/dpdk/spdk_pid66101
00:34:34.326 Removing: /var/run/dpdk/spdk_pid66105
00:34:34.326 Removing: /var/run/dpdk/spdk_pid66122
00:34:34.326 Removing: /var/run/dpdk/spdk_pid66169
00:34:34.326 Removing: /var/run/dpdk/spdk_pid66173
00:34:34.326 Removing: /var/run/dpdk/spdk_pid66185
00:34:34.326 Removing: /var/run/dpdk/spdk_pid66231
00:34:34.326 Removing: /var/run/dpdk/spdk_pid66240
00:34:34.326 Removing: /var/run/dpdk/spdk_pid66252
00:34:34.326 Removing: /var/run/dpdk/spdk_pid66297
00:34:34.326 Removing: /var/run/dpdk/spdk_pid66301
00:34:34.326 Removing: /var/run/dpdk/spdk_pid66313
00:34:34.326 Removing: /var/run/dpdk/spdk_pid67741
00:34:34.326 Removing: /var/run/dpdk/spdk_pid67860
00:34:34.326 Removing: /var/run/dpdk/spdk_pid69284
00:34:34.326 Removing: /var/run/dpdk/spdk_pid71027
00:34:34.326 Removing: /var/run/dpdk/spdk_pid71118
00:34:34.326 Removing: /var/run/dpdk/spdk_pid71204
00:34:34.326 Removing: /var/run/dpdk/spdk_pid71320
00:34:34.326 Removing: /var/run/dpdk/spdk_pid71423
00:34:34.326 Removing: /var/run/dpdk/spdk_pid71524
00:34:34.326 Removing: /var/run/dpdk/spdk_pid71617
00:34:34.326 Removing: /var/run/dpdk/spdk_pid71702
00:34:34.326 Removing: /var/run/dpdk/spdk_pid71813
00:34:34.326 Removing: /var/run/dpdk/spdk_pid71916
00:34:34.326 Removing: /var/run/dpdk/spdk_pid72012
00:34:34.326 Removing: /var/run/dpdk/spdk_pid72104
00:34:34.326 Removing: /var/run/dpdk/spdk_pid72190
00:34:34.326 Removing: /var/run/dpdk/spdk_pid72306
00:34:34.326 Removing: /var/run/dpdk/spdk_pid72398
00:34:34.326 Removing: /var/run/dpdk/spdk_pid72506
00:34:34.326 Removing: /var/run/dpdk/spdk_pid72601
00:34:34.326 Removing: /var/run/dpdk/spdk_pid72676
00:34:34.326 Removing: /var/run/dpdk/spdk_pid72797
00:34:34.326 Removing: /var/run/dpdk/spdk_pid72900
00:34:34.326 Removing: /var/run/dpdk/spdk_pid72996
00:34:34.326 Removing: /var/run/dpdk/spdk_pid73087
00:34:34.326 Removing: /var/run/dpdk/spdk_pid73164
00:34:34.326 Removing: /var/run/dpdk/spdk_pid73246
00:34:34.326 Removing: /var/run/dpdk/spdk_pid73330
00:34:34.326 Removing: /var/run/dpdk/spdk_pid73444
00:34:34.326 Removing: /var/run/dpdk/spdk_pid73540
00:34:34.326 Removing: /var/run/dpdk/spdk_pid73646
00:34:34.326 Removing: /var/run/dpdk/spdk_pid73730
00:34:34.327 Removing: /var/run/dpdk/spdk_pid73811
00:34:34.327 Removing: /var/run/dpdk/spdk_pid73891
00:34:34.327 Removing: /var/run/dpdk/spdk_pid73971
00:34:34.586 Removing: /var/run/dpdk/spdk_pid74080
00:34:34.586 Removing: /var/run/dpdk/spdk_pid74181
00:34:34.586 Removing: /var/run/dpdk/spdk_pid74332
00:34:34.586 Removing: /var/run/dpdk/spdk_pid74634
00:34:34.586 Removing: /var/run/dpdk/spdk_pid74676
00:34:34.586 Removing: /var/run/dpdk/spdk_pid75173
00:34:34.586 Removing: /var/run/dpdk/spdk_pid75358
00:34:34.586 Removing: /var/run/dpdk/spdk_pid75463
00:34:34.586 Removing: /var/run/dpdk/spdk_pid75584
00:34:34.586 Removing: /var/run/dpdk/spdk_pid75643
00:34:34.586 Removing: /var/run/dpdk/spdk_pid75669
00:34:34.586 Removing: /var/run/dpdk/spdk_pid75957
00:34:34.586 Removing: /var/run/dpdk/spdk_pid76030
00:34:34.586 Removing: /var/run/dpdk/spdk_pid76127
00:34:34.586 Removing: /var/run/dpdk/spdk_pid76563
00:34:34.586 Removing: /var/run/dpdk/spdk_pid76712
00:34:34.586 Removing: /var/run/dpdk/spdk_pid77509
00:34:34.586 Removing: /var/run/dpdk/spdk_pid77664
00:34:34.586 Removing: /var/run/dpdk/spdk_pid77886
00:34:34.586 Removing: /var/run/dpdk/spdk_pid77996
00:34:34.586 Removing: /var/run/dpdk/spdk_pid78327
00:34:34.586 Removing: /var/run/dpdk/spdk_pid78592
00:34:34.586 Removing: /var/run/dpdk/spdk_pid78945
00:34:34.586 Removing: /var/run/dpdk/spdk_pid79154
00:34:34.586 Removing: /var/run/dpdk/spdk_pid79277
00:34:34.586 Removing: /var/run/dpdk/spdk_pid79346
00:34:34.586 Removing: /var/run/dpdk/spdk_pid79473
00:34:34.586 Removing: /var/run/dpdk/spdk_pid79504
00:34:34.586 Removing: /var/run/dpdk/spdk_pid79572
00:34:34.586 Removing: /var/run/dpdk/spdk_pid79757
00:34:34.586 Removing: /var/run/dpdk/spdk_pid80009
00:34:34.586 Removing: /var/run/dpdk/spdk_pid80378
00:34:34.586 Removing: /var/run/dpdk/spdk_pid80766
00:34:34.586 Removing: /var/run/dpdk/spdk_pid81157
00:34:34.586 Removing: /var/run/dpdk/spdk_pid81604
00:34:34.586 Removing: /var/run/dpdk/spdk_pid81752
00:34:34.586 Removing: /var/run/dpdk/spdk_pid81844
00:34:34.586 Removing: /var/run/dpdk/spdk_pid82425
00:34:34.586 Removing: /var/run/dpdk/spdk_pid82499
00:34:34.586 Removing: /var/run/dpdk/spdk_pid82897
00:34:34.586 Removing: /var/run/dpdk/spdk_pid83245
00:34:34.586 Removing: /var/run/dpdk/spdk_pid83706
00:34:34.586 Removing: /var/run/dpdk/spdk_pid83845
00:34:34.586 Removing: /var/run/dpdk/spdk_pid83911
00:34:34.586 Removing: /var/run/dpdk/spdk_pid83975
00:34:34.587 Removing: /var/run/dpdk/spdk_pid84035
00:34:34.587 Removing: /var/run/dpdk/spdk_pid84098
00:34:34.587 Removing: /var/run/dpdk/spdk_pid84298
00:34:34.587 Removing: /var/run/dpdk/spdk_pid84392
00:34:34.587 Removing: /var/run/dpdk/spdk_pid84456
00:34:34.587 Removing: /var/run/dpdk/spdk_pid84531
00:34:34.587 Removing: /var/run/dpdk/spdk_pid84566
00:34:34.587 Removing: /var/run/dpdk/spdk_pid84644
00:34:34.587 Removing: /var/run/dpdk/spdk_pid84775
00:34:34.587 Clean
00:34:34.846 11:48:40 -- common/autotest_common.sh@1453 -- # return 0
00:34:34.846 11:48:40 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:34:34.846 11:48:40 -- common/autotest_common.sh@732 -- # xtrace_disable
00:34:34.846 11:48:40 -- common/autotest_common.sh@10 -- # set +x
00:34:34.846 11:48:40 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:34:34.846 11:48:40 -- common/autotest_common.sh@732 -- # xtrace_disable
00:34:34.846 11:48:40 -- common/autotest_common.sh@10 -- # set +x
00:34:34.846 11:48:40 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:34:34.846 11:48:40 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:34:34.846 11:48:40 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:34:34.846 11:48:40 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:34:34.846 11:48:40 -- spdk/autotest.sh@398 -- # hostname
00:34:34.846 11:48:40 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:35:01.725 geninfo: WARNING: invalid characters removed from testname!
00:35:01.725 11:49:04 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:01.725 11:49:07 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:03.631 11:49:09 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:06.184 11:49:11 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:08.089 11:49:13 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:09.995 11:49:15 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:12.530 11:49:17 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:35:12.530 11:49:17 -- spdk/autorun.sh@1 -- $ timing_finish
00:35:12.530 11:49:17 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:35:12.530 11:49:17 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:35:12.530 11:49:17 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:35:12.530 11:49:17 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:35:12.530 + [[ -n 5301 ]]
00:35:12.530 + sudo kill 5301
00:35:12.546 [Pipeline] }
00:35:12.562 [Pipeline] // timeout
00:35:12.568 [Pipeline] }
00:35:12.584 [Pipeline] // stage
00:35:12.590 [Pipeline] }
00:35:12.606 [Pipeline] // catchError
00:35:12.615 [Pipeline] stage
00:35:12.618 [Pipeline] { (Stop VM)
00:35:12.633 [Pipeline] sh
00:35:12.919 + vagrant halt
00:35:16.209 ==> default: Halting domain...
00:35:22.829 [Pipeline] sh
00:35:23.109 + vagrant destroy -f
00:35:26.394 ==> default: Removing domain...
00:35:26.665 [Pipeline] sh
00:35:26.948 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:35:26.958 [Pipeline] }
00:35:26.976 [Pipeline] // stage
00:35:26.982 [Pipeline] }
00:35:26.997 [Pipeline] // dir
00:35:27.004 [Pipeline] }
00:35:27.020 [Pipeline] // wrap
00:35:27.028 [Pipeline] }
00:35:27.042 [Pipeline] // catchError
00:35:27.053 [Pipeline] stage
00:35:27.055 [Pipeline] { (Epilogue)
00:35:27.069 [Pipeline] sh
00:35:27.354 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:35:32.639 [Pipeline] catchError
00:35:32.641 [Pipeline] {
00:35:32.654 [Pipeline] sh
00:35:32.985 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:35:33.244 Artifacts sizes are good
00:35:33.252 [Pipeline] }
00:35:33.267 [Pipeline] // catchError
00:35:33.279 [Pipeline] archiveArtifacts
00:35:33.286 Archiving artifacts
00:35:33.401 [Pipeline] cleanWs
00:35:33.414 [WS-CLEANUP] Deleting project workspace...
00:35:33.414 [WS-CLEANUP] Deferred wipeout is used...
00:35:33.420 [WS-CLEANUP] done
00:35:33.422 [Pipeline] }
00:35:33.438 [Pipeline] // stage
00:35:33.444 [Pipeline] }
00:35:33.458 [Pipeline] // node
00:35:33.464 [Pipeline] End of Pipeline
00:35:33.550 Finished: SUCCESS